Research articles for 2019-10-13
arXiv
The efficient market hypothesis has been considered one of the most controversial arguments in finance, with academia divided between those who claim it is impossible to beat the market and those who believe it is possible to earn above-average profits. If the hypothesis holds, it means, as suggested by Burton Malkiel, that a blindfolded monkey selecting stocks by throwing darts at a newspaper's financial pages could perform as well as a financial analyst, or even better. In this paper we use a novel approach, based on confidence intervals for proportions, to assess the degree of inefficiency in the S&P 500 Index components, concluding that several stocks are inefficient: we estimate the proportion of inefficient stocks in the index to be between 12.13% and 27.87%. This supports other studies suggesting that a financial analyst is probably a better investor than a blindfolded monkey.
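As an illustration of the kind of interval construction this abstract alludes to, a Wilson score confidence interval for a binomial proportion can be sketched as follows. The counts below are hypothetical (100 of 500 stocks flagged inefficient); the paper's exact method and data are not reproduced in the abstract.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical counts: 100 of 500 index components flagged inefficient.
lo, hi = proportion_ci(100, 500)
print(f"estimated proportion of inefficient stocks in [{lo:.4f}, {hi:.4f}]")
```

The Wilson interval is preferred over the naive normal approximation for proportions near 0 or 1, since it never produces bounds outside [0, 1].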
arXiv
We propose a novel approach for causal mediation analysis based on changes-in-changes assumptions restricting unobserved heterogeneity over time. This allows disentangling the causal effect of a binary treatment on a continuous outcome into an indirect effect operating through a binary intermediate variable (called mediator) and a direct effect running via other causal mechanisms. We identify average and quantile direct and indirect effects for various subgroups under the condition that the outcome is monotonic in the unobserved heterogeneity and that the distribution of the latter does not change over time conditional on the treatment and the mediator. We also provide a simulation study and an empirical application to the Jobs II programme.
arXiv
We present a perturbation theory of market impact based on an extension of the framework proposed by [Loeper, 2018] -- originally based on [Liu and Yong, 2005] -- in which we consider only local linear market impact. We study the execution process of hedging derivatives and show how these hedging metaorders can explain some stylized facts observed in the empirical market impact literature. As we are interested in the execution process of hedging, we establish that the arbitrage opportunities that exist in the discrete-time setting vanish when the trading frequency goes to infinity, allowing us to derive a pricing equation. Furthermore, our approach retrieves several results already established in the option pricing literature, such as the spot dynamics modified by the market impact. We also study the relaxation of our hedging metaorders based on the fair pricing hypothesis and establish a relation between the immediate impact and the permanent impact which is in agreement with recent empirical studies on the subject.
arXiv
Considering event structure information has proven helpful in text-based stock movement prediction. However, existing works mainly adopt coarse-grained events, which lose the specific semantic information of diverse event types. In this work, we propose to incorporate fine-grained events into stock movement prediction. First, we build a professional finance event dictionary with domain experts and use it to extract fine-grained events automatically from finance news. Then we design a neural model that combines finance news with the fine-grained event structure and stock trade data to predict stock movement. In addition, to improve the generalizability of the proposed method, we design an advanced model that uses the extracted fine-grained events as distant supervision labels to train a multi-task framework of event extraction and stock prediction. The experimental results show that our method outperforms all the baselines and generalizes well.
arXiv
Productivity levels and growth are extremely heterogeneous among firms. A vast literature has developed to explain the origins of productivity shocks, their dispersion, their evolution, and their relationship to the business cycle. We examine in detail the distribution of labor productivity levels and growth, and observe that they exhibit heavy tails. We propose to model these distributions using the four-parameter L\'{e}vy stable distribution, a natural candidate deriving from the generalised Central Limit Theorem. We show that it is a better fit than several standard alternatives, and is remarkably consistent over time, countries and sectors. In all samples considered, the tail parameter is such that the theoretical variance of the distribution is infinite, so that the sample standard deviation increases with sample size. We find a consistent positive skewness, a markedly different behaviour between the left and right tails, and a positive relationship between productivity and size. The distributional approach allows us to test different measures of dispersion and find that productivity dispersion has slightly decreased over the past decade.
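The infinite-variance behaviour the abstract describes can be reproduced with a simple simulation. A minimal sketch, assuming a symmetric stable law (the paper uses the full four-parameter family): the Chambers-Mallows-Stuck method draws α-stable variates, and for α < 2 the sample standard deviation tends to grow with sample size rather than converge.

```python
import math, random

def stable_sample(alpha, n, seed=0):
    """Draw n symmetric alpha-stable variates (beta = 0, unit scale)
    via the Chambers-Mallows-Stuck method; valid for alpha != 1."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.uniform(-math.pi / 2, math.pi / 2)   # uniform angle
        w = rng.expovariate(1.0)                     # unit exponential
        x = (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
             * (math.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))
        out.append(x)
    return out

def sample_std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# With alpha < 2 the theoretical variance is infinite, so the sample
# standard deviation typically keeps growing as the sample gets larger.
print(sample_std(stable_sample(1.3, 1_000)),
      sample_std(stable_sample(1.3, 100_000)))
```

The tail index alpha = 1.3 here is illustrative, not the paper's estimate.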
arXiv
We develop a dual control method for approximating investment strategies in incomplete environments that emerge from the presence of trading constraints. Convex duality enables the approximation technique to generate lower and upper bounds on the optimal value function. The mechanism rests on closed-form expressions for the portfolio composition, from which we derive the near-optimal asset allocation explicitly. In a real financial market, we illustrate the accuracy of our approximation method on a dual CRRA utility function that characterizes the preferences of a finite-horizon investor. Negligible duality gaps and insignificant annual welfare losses substantiate the accuracy of the technique.
arXiv
We study the problem of dynamically trading multiple futures contracts with different underlying assets. To capture the joint dynamics of stochastic bases for all traded futures, we propose a new model involving a multi-dimensional scaled Brownian bridge that is stopped before price convergence. This leads to the analysis of the corresponding Hamilton-Jacobi-Bellman (HJB) equations, whose solutions are derived in semi-explicit form. The resulting optimal trading strategy is a long-short policy that accounts for whether the futures are in contango or backwardation. Our model also allows us to quantify and compare the values of trading in the futures markets when the underlying assets are traded or not. Numerical examples are provided to illustrate the optimal strategies and the effects of model parameters.
arXiv
Due to superstition, license plates with desirable combinations of characters are highly sought after in China, fetching prices that can reach into the millions in government-held auctions. Despite the high stakes involved, there has been essentially no attempt to provide price estimates for license plates. We present an end-to-end neural network model that simultaneously predicts the auction price, gives the distribution of prices, and produces latent feature vectors. While both types of neural network architectures we consider outperform simpler machine learning methods, convolutional networks outperform recurrent networks for comparable training time or model complexity. The resulting model powers our online price estimator and search engine.
arXiv
Guo and Zhu (2017) recently proposed an equal-risk pricing approach to the valuation of contingent claims when short selling is completely banned, deriving two elegant pricing formulae in some special cases. In this paper, we establish a unified framework for this new pricing approach so that its range of application can be significantly expanded. The main contribution of our framework is that it not only recovers the analytical pricing formula derived by Guo and Zhu (2017) when the payoff is monotonic, but also numerically produces equal-risk prices for contingent claims with non-monotonic payoffs, a task which has not been accomplished before. Furthermore, we demonstrate how a short selling ban affects the valuation of contingent claims by comparing equal-risk prices with Black-Scholes prices.
arXiv
This paper shows that the $q$-exponential function rationally evaluates time discounting. When we consider two processes of wealth accumulation with different frequencies, the discount rate and the relative frequency between them are essential to choosing the best process. In this context, exponential discounting is a particular case in which one of the processes has a much higher frequency relative to the other. In addition, one can note that some behaviors observed empirically in decision makers, such as subadditivity, the magnitude effect, and preference reversal, are consistent with processes that have a low relative frequency.
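For concreteness, the Tsallis $q$-exponential is $e_q(x) = [1 + (1-q)x]^{1/(1-q)}$, which reduces to $e^x$ as $q \to 1$. A minimal sketch of $q$-discounting under the common parametrization $D(t) = 1/e_q(rt)$ (an assumed form for illustration; the abstract does not spell out the paper's exact formulation):

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: [1 + (1 - q) x]^(1/(1 - q)); exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1 + (1 - q) * x
    return base ** (1 / (1 - q)) if base > 0 else 0.0

def q_discount(r, t, q):
    """Assumed q-discount factor D(t) = 1 / e_q(r * t)."""
    return 1.0 / q_exp(r * t, q)

print(q_discount(0.05, 10, 1.0))  # exponential limit, exp(-0.5) ~ 0.6065
print(q_discount(0.05, 10, 0.0))  # hyperbolic case 1/(1 + r t) ~ 0.6667
```

At $q = 0$ the formula collapses to plain hyperbolic discounting $1/(1 + rt)$, which is one way the family spans the behaviors (subadditivity, preference reversal) the abstract mentions.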
arXiv
Quantitative finance has had a long tradition of a bottom-up approach to complex systems inference via multi-agent systems (MAS). These statistical tools are based on modelling agents trading via a centralised order book, in order to emulate complex and diverse market phenomena. These past financial models have all relied on so-called zero-intelligence agents, so that the crucial issues of agent information and learning, central to price formation and hence to all market activity, could not be properly assessed. In order to address this, we designed a next-generation MAS stock market simulator, in which each agent learns to trade autonomously via model-free reinforcement learning. We calibrate the model to real market data from the London Stock Exchange over the years 2007 to 2018, and show that it can faithfully reproduce key market microstructure metrics, such as various price autocorrelation scalars over multiple time intervals. Agent learning thus enables model emulation of the microstructure with superior realism.
arXiv
In the past, financial stock markets have been studied with previous generations of multi-agent systems (MAS) that relied on zero-intelligence agents, often requiring so-called noise traders to sub-optimally emulate price formation processes. However, recent advances in the fields of neuroscience and machine learning have brought new tools for the bottom-up statistical inference of complex systems. Most importantly, such tools allow for studying new topics, such as agent learning, which in finance is central to information and stock price estimation. We present here the results of a new-generation MAS stock market simulator, where each agent autonomously learns to forecast prices and trade stocks via model-free reinforcement learning, and where the collective behaviour of all agents' trading decisions feeds a centralised double-auction limit order book, emulating price and volume microstructures. We study in detail what such agents learn, and how heterogeneous the policies they develop over time are. We also show how the agents' learning rates and their propensity to be chartist or fundamentalist impact overall market stability and individual agent performance. We conclude with a study of the impact of agent information via random trading.
arXiv
I propose a novel method, the Wasserstein Index Generation model (WIG), to generate a public sentiment index automatically. To test the model's effectiveness, an application generating an Economic Policy Uncertainty (EPU) index is showcased.