Research articles for 2020-04-05
arXiv
A collectivised fund is a proposed form of pension investment in which all investors agree that any funds associated with deceased members are split among the survivors. For this to be a viable financial product, it is necessary to know how to manage the fund even when it is heterogeneous: that is, when different investors have different preferences, wealth and mortality. There is no obvious way to define a single objective for a heterogeneous fund, so this is not an optimal control problem. In lieu of an objective function, we take an axiomatic approach. Subject to our axioms on the management of the fund, we find an upper bound on the utility that can be achieved for each investor, assuming complete markets and the absence of systematic longevity risk. We give a strategy for the management of such heterogeneous funds which achieves this bound asymptotically as the number of investors tends to infinity.
arXiv
Recent advances in computational power and machine learning algorithms have led to vast improvements across many areas of research. In finance especially, the application of machine learning enables researchers to gain new insights into well-studied areas. In our paper, we demonstrate that unsupervised machine learning algorithms can be used to visualize and classify company data in an economically meaningful and effective way. In particular, we implement the t-distributed stochastic neighbor embedding (t-SNE) algorithm, owing to its beneficial properties as a data-driven dimension reduction and visualization tool, in combination with spectral clustering to perform company classification. The resulting groups can then be used by experts in the field for empirical analysis and optimal decision making. Through an exemplary out-of-sample study within a portfolio optimization framework, we show that meaningful grouping of stock data improves the overall portfolio performance. We therefore introduce the t-SNE algorithm to the financial community as a valuable technique for both researchers and practitioners.
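A minimal sketch of the pipeline described above, assuming synthetic data in place of real company fundamentals and scikit-learn as the toolkit (the paper does not specify its implementation): embed standardized company features with t-SNE, then group the embedded points with spectral clustering.

```python
# Illustrative only: the feature matrix below stands in for real company data.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))           # 300 companies, 20 fundamental features (placeholder)

X_std = StandardScaler().fit_transform(X)
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_std)

labels = SpectralClustering(n_clusters=5, affinity="nearest_neighbors",
                            random_state=0).fit_predict(emb)
print(labels[:10])                        # cluster assignment per company
```

The cluster labels can then serve as the groups handed to domain experts or fed into a downstream portfolio optimization.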
arXiv
The popularity of deep reinforcement learning (DRL) methods in economics has increased exponentially. By combining capabilities from reinforcement learning (RL) and deep learning (DL) for handling sophisticated, dynamic business environments, DRL offers vast opportunities. DRL is characterized by scalability, with the potential to be applied to high-dimensional problems with noisy and nonlinear patterns in economic data. In this work, we first give a brief review of DL, RL, and deep RL methods across diverse applications in economics, providing an in-depth insight into the state of the art. Furthermore, the architecture of DRL applied to economic applications is investigated in order to highlight complexity, robustness, accuracy, performance, computational requirements, risk constraints, and profitability. The survey results indicate that DRL can provide better performance and higher accuracy than traditional algorithms on real economic problems, in the presence of risk parameters and ever-increasing uncertainty.
arXiv
In this paper we propose a deep recurrent architecture for the probabilistic modelling of high-frequency market prices, important for the risk management of automated trading systems. Our proposed architecture incorporates probabilistic mixture models into deep recurrent neural networks. The resulting deep mixture models simultaneously address several practical challenges important in the development of automated high-frequency trading strategies that were previously neglected in the literature: 1) probabilistic forecasting of price movements; 2) single-objective prediction of both the direction and size of price movements. We train our models on high-frequency Bitcoin market data and evaluate them against benchmark models from the literature. We show that our model outperforms the benchmark models both in a metric-based test and in a simulated trading scenario.
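A hedged sketch of the general idea, not the paper's exact architecture: an LSTM whose output parameterises a Gaussian mixture over the next price move, trained by maximising the mixture log-likelihood. All data below are synthetic placeholders.

```python
import torch
import torch.nn as nn

class LSTMMixture(nn.Module):
    def __init__(self, n_features=1, hidden=32, n_components=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3 * n_components)   # weights, means, log-stds
        self.k = n_components

    def forward(self, x):
        h, _ = self.lstm(x)
        p = self.head(h[:, -1])                            # use last time step
        logits, mu, log_sigma = p.chunk(3, dim=-1)
        return logits, mu, log_sigma

def mixture_nll(logits, mu, log_sigma, y):
    # negative log-likelihood of y under the predicted Gaussian mixture
    log_w = torch.log_softmax(logits, dim=-1)
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = dist.log_prob(y.unsqueeze(-1))              # (batch, k)
    return -torch.logsumexp(log_w + log_prob, dim=-1).mean()

model = LSTMMixture()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 50, 1)                                 # 64 sequences of 50 past moves
y = torch.randn(64)                                        # next move (placeholder)
for _ in range(5):
    opt.zero_grad()
    loss = mixture_nll(*model(x), y)
    loss.backward()
    opt.step()
```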
arXiv
In this paper we propose a deep recurrent model based on the order flow for the stationary modelling of high-frequency directional price movements. The order flow is the microsecond stream of orders arriving at the exchange, driving the formation of prices seen on the price chart of a stock or currency. To test the stationarity of our proposed model, we train it on data before the 2017 Bitcoin bubble period and test it during and after the bubble. We show that without any retraining, the proposed model is temporally stable even as Bitcoin trading shifts into an extremely volatile "bubble trouble" period. The significance of the result is shown by benchmarking against existing state-of-the-art models in the literature for modelling price formation using deep learning.
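For illustration only: the check below mimics the stationarity experiment with a chronological split, but substitutes synthetic signed order flow and a logistic-regression baseline for the paper's deep recurrent model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
signed_flow = rng.normal(size=100_000)            # +size for buys, -size for sells (placeholder)
window = 100

# features: total signed flow per window; label: sign of the next window's flow
feat = signed_flow[: len(signed_flow) // window * window].reshape(-1, window).sum(axis=1)
X, y = feat[:-1].reshape(-1, 1), (feat[1:] > 0).astype(int)

split = len(X) // 2                               # earlier half "pre-bubble", later half "bubble"
clf = LogisticRegression().fit(X[:split], y[:split])
print("out-of-period accuracy:", accuracy_score(y[split:], clf.predict(X[split:])))
```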
arXiv
Prediction of stock group values has always been attractive and challenging for shareholders. This paper concentrates on the future prediction of stock market groups. Four groups, namely diversified financials, petroleum, non-metallic minerals and basic metals from the Tehran stock exchange, are chosen for experimental evaluation. Data are collected for the groups based on ten years of historical records, and value predictions are made for 1, 2, 5, 10, 15, 20 and 30 days ahead. Machine learning algorithms are used to predict the future values of the stock market groups: we employ Decision Tree, Bagging, Random Forest, Adaptive Boosting (AdaBoost), Gradient Boosting, eXtreme Gradient Boosting (XGBoost), Artificial Neural Network (ANN), Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM). Ten technical indicators are selected as the inputs to each of the prediction models. Finally, the prediction results are presented for each technique based on three metrics. Among all the algorithms used in this paper, LSTM gives the most accurate results with the highest model-fitting ability. For the tree-based models, there is often intense competition between AdaBoost, Gradient Boosting, and XGBoost.
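A rough sketch of this experimental setup, with synthetic prices, a handful of simplified indicators, and scikit-learn ensembles standing in for the full list of models used in the paper:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, AdaBoostRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
price = pd.Series(100 + rng.normal(0, 1, 2500).cumsum())    # ~10 years of daily closes (placeholder)

horizon = 10                                                # predict the value 10 days ahead
feats = pd.DataFrame({
    "sma_10": price.rolling(10).mean(),                     # simple moving average
    "mom_10": price.diff(10),                               # momentum
    "vol_10": price.pct_change().rolling(10).std(),         # rolling volatility
})
target = price.shift(-horizon)

data = pd.concat([feats, target.rename("target")], axis=1).dropna()
split = int(len(data) * 0.8)
X_tr, y_tr = data.iloc[:split, :-1], data.iloc[:split, -1]
X_te, y_te = data.iloc[split:, :-1], data.iloc[split:, -1]

for model in (RandomForestRegressor(random_state=0),
              GradientBoostingRegressor(random_state=0),
              AdaBoostRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__, mean_absolute_error(y_te, pred))
```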
arXiv
The financial market trend forecasting method is emerging as a hot topic in financial markets today. Many challenges still remain, and a variety of related research has been actively conducted. In particular, recent research on neural-network-based financial market trend prediction has attracted much attention. However, previous studies do not address forecasting methods based on LSTM, which performs well on time series data, and there is a lack of comparative analysis of the performance of neural-network-based prediction techniques against traditional prediction techniques. In this paper, we propose a financial market trend forecasting method using LSTM and analyze its performance against existing financial market trend forecasting methods through experiments. The method prepares the input data set through a preprocessing process that reflects all of the fundamental, technical and qualitative data used in financial data analysis, and performs a comprehensive financial market analysis through LSTM. We experiment with and compare the performance of existing financial market trend forecasting models, as well as performance under different financial market environments. In addition, we implement the proposed method using open-source tools and platforms and forecast financial market trends using various financial data indicators.
arXiv
Cross-impact, namely the fact that on average buy (sell) trades on a financial instrument induce positive (negative) price changes in other correlated assets, can be measured from abundant, although noisy, market data. In this paper we propose a principled approach to model selection for cross-impact models, showing that symmetries and consistency requirements are particularly effective in reducing the universe of possible models to a much smaller set of viable candidates, thus mitigating the effect of noise on the properties of the inferred model. We review the empirical performance of a large number of cross-impact models, comparing their strengths and weaknesses on a number of asset classes (futures, stocks, calendar spreads). Besides showing which models perform better, we argue that in the presence of comparable statistical performance, which is often the case in a noisy world, it is relevant to favor models that provide ex-ante theoretical guarantees on their behavior in limit cases. From this perspective, we advocate that the empirical validation of universal properties (symmetries, invariances) should be regarded as holding a much deeper epistemological value than any measure of statistical performance on specific model instances.
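As a toy illustration of what "measuring cross-impact from market data" means in the simplest linear setting (this is not any specific model from the survey): regress each asset's price change on the signed order flow of all assets and recover a cross-impact matrix by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_obs = 4, 5000
true_lambda = np.array([[1.0, 0.3, 0.1, 0.0],
                        [0.3, 1.0, 0.2, 0.1],
                        [0.1, 0.2, 1.0, 0.4],
                        [0.0, 0.1, 0.4, 1.0]])

flow = rng.normal(size=(n_obs, n_assets))                  # signed order flow per asset
dp = flow @ true_lambda.T + 0.5 * rng.normal(size=(n_obs, n_assets))   # noisy price changes

# least-squares estimate: find Lambda minimising || dp - flow @ Lambda.T ||
lambda_hat, *_ = np.linalg.lstsq(flow, dp, rcond=None)
print(np.round(lambda_hat.T, 2))                           # close to true_lambda
```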
SSRN
We investigate how idiosyncratic lender shocks impact corporate investment. Lenders with recent default experience write stricter loan contracts, leading to a reduction in real investment for borrowing firms. The decline in investment is not attributable to loan riskiness, borrowers' agency costs, the lender-borrower relationship nexus, or lender capitalization. The findings remain robust when controlling for lender fragility, macroeconomic conditions, and aggregate defaults in the economy, and when excluding defaulters from the same industry or geographic region. The evidence suggests that defaults inform lenders about investment opportunities and their own screening ability, and that adjusting to this information has real economic consequences.
arXiv
This research paper explores the performance of Machine Learning (ML) algorithms and techniques that can be used for financial asset price forecasting. The prediction and forecasting of asset prices and returns remains one of the most challenging and exciting problems for quantitative finance researchers and practitioners alike. The massive increase in data generated and captured in recent years presents an opportunity to leverage Machine Learning algorithms. This study directly compares and contrasts state-of-the-art implementations of modern Machine Learning algorithms on high performance computing (HPC) infrastructures against the traditional and highly popular Capital Asset Pricing Model (CAPM) on U.S. equities data. The implemented Machine Learning models - trained on time series data for an entire stock universe (in addition to exogenous macroeconomic variables) - significantly outperform the CAPM on out-of-sample (OOS) test data.
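For reference, a minimal implementation of the CAPM benchmark side of the comparison (the ML side is not reproduced here); the return series and risk-free rate are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
r_mkt = rng.normal(0.0005, 0.01, 1000)         # daily market returns (placeholder)
r_stock = 0.0001 + 1.2 * r_mkt + rng.normal(0, 0.01, 1000)   # one stock's returns (placeholder)
r_free = 0.0001                                # daily risk-free rate (placeholder)

# CAPM: regress excess stock returns on excess market returns to estimate beta
x = r_mkt - r_free
y = r_stock - r_free
beta, alpha = np.polyfit(x, y, 1)              # slope = beta, intercept = alpha

expected_return = r_free + beta * (r_mkt.mean() - r_free)    # CAPM expected daily return
print(f"beta={beta:.2f}, alpha={alpha:.5f}, E[r]={expected_return:.5f}")
```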
arXiv
In this paper the zero vanna implied volatility approximation for the price of freshly minted volatility swaps is generalised to seasoned volatility swaps. We also derive how volatility swaps can be hedged using a strip of vanilla options with weights that are directly related to trading intuition. Additionally, we derive first and second order hedges for volatility swaps using only variance swaps. As dynamically trading variance swaps is in general cheaper and operationally less cumbersome compared to dynamically rebalancing a continuous strip of options, our result makes the hedging of volatility swaps both practically feasible and robust. Within the class of stochastic volatility models our pricing and hedging results are model-independent and can be implemented at almost no computational cost.
arXiv
This research develops a Machine Learning approach able to predict labor shortages for occupations. We compile a unique dataset that incorporates both Labor Demand and Labor Supply occupational data in Australia from 2012 to 2018. This includes data from 1.3 million job advertisements (ads) and 20 official labor force measures. We use these data as explanatory variables and leverage the XGBoost classifier to predict yearly labor shortage classifications for 132 standardized occupations. The models we construct achieve macro-F1 average performance scores of up to 86 per cent. However, the more significant findings concern the class of features which are most predictive of labor shortage changes. Our results show that job ads data were the most predictive features for predicting year-to-year labor shortage changes for occupations. These findings are significant because they highlight the predictive value of job ads data when they are used as proxies for Labor Demand, and incorporated into labor market prediction models. This research provides a robust framework for predicting labor shortages, and their changes, and has the potential to assist policy-makers and businesses responsible for preparing labor markets for the future of work.
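A schematic of the prediction task under stated assumptions: synthetic features stand in for the job-ads and labor-force variables, an XGBoost classifier flags occupations in shortage, and macro-F1 is used as the evaluation metric, as in the paper.

```python
import numpy as np
from xgboost import XGBClassifier            # assumes the xgboost package is installed
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(132 * 6, 25))            # 132 occupations x 6 years, 25 features (placeholder)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, len(X)) > 0).astype(int)   # shortage flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("macro-F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```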
arXiv
The recently proposed Wasserstein Index Generation (WIG) model has shown a new direction for automatically generating indices. However, it is challenging in practice to fit on large datasets for two reasons. First, the Sinkhorn distance is notoriously expensive to compute and suffers severely from high dimensionality. Second, the method requires computing a full $N\times N$ matrix to be fit into memory, where $N$ is the dimension of the vocabulary. When the dimensionality is too large, the computation becomes infeasible altogether. I hereby propose a Lasso-based shrinkage method to reduce the dimensionality of the vocabulary as a pre-processing step prior to fitting the WIG model. After we obtain the word embeddings from a Word2Vec model, we cluster these high-dimensional vectors by $k$-means clustering and pick the most frequent tokens within each cluster to form the "base vocabulary". Non-base tokens are then regressed on the vectors of the base tokens to obtain transformation weights, so that the whole vocabulary can be represented by the base tokens alone. This variant, called pruned WIG (pWIG), enables us to shrink the vocabulary dimension at will while still achieving high accuracy. I also provide a \textit{wigpy} module in Python to carry out the computation in both flavors. An application to the Economic Policy Uncertainty (EPU) index is showcased as a comparison with existing methods of generating time-series sentiment indices.
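A sketch of the proposed vocabulary-pruning step under simplifying assumptions (random vectors replace Word2Vec embeddings and synthetic counts replace token frequencies; the actual implementation lives in the author's wigpy module):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_tokens, dim, n_base = 500, 50, 20
vectors = rng.normal(size=(n_tokens, dim))         # placeholder word embeddings
counts = rng.integers(1, 1000, size=n_tokens)      # placeholder token frequencies

# cluster the embeddings and keep the most frequent token per cluster as "base vocabulary"
labels = KMeans(n_clusters=n_base, n_init=10, random_state=0).fit_predict(vectors)
base_idx = np.array([np.where(labels == c)[0][np.argmax(counts[labels == c])]
                     for c in range(n_base)])

# Lasso-regress every non-base token's vector on the base vectors -> transformation weights
B = vectors[base_idx].T                             # (dim, n_base) design matrix
weights = np.zeros((n_tokens, n_base))
for t in range(n_tokens):
    if t in base_idx:
        continue                                    # base tokens represent themselves
    weights[t] = Lasso(alpha=0.01, max_iter=5000).fit(B, vectors[t]).coef_

print(weights.shape)                                # every token expressed via the base tokens
```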
SSRN
Recent research on real options considers optimal investment decisions not only under risk but also under ambiguity. However, most models that allow for ambiguity are generally not dynamically consistent. Examples are, among others, the alpha-MEU model by Ghirardato et al. (2004) and the NMEU model, as recently used in Gao et al. (2018). Dynamic consistency is, however, required to solve real options problems analytically. This paper highlights the resulting difficulties, which are often overlooked, exemplarily for the NMEU model.
arXiv
The paper studies different regression approaches for modeling COVID-19 spread and its impact on the stock market. A logistic curve model combined with Bayesian regression was used for predictive analytics of the coronavirus spread. The impact of COVID-19 was studied using a regression approach and compared to the influence of other crises. In practical analytics, it is important to find the day with the maximum number of new coronavirus cases, as this point marks the estimated halfway time of the coronavirus spread in the region under investigation. The obtained results show that different crises with different causes have different impacts on the same stocks, and it is important to analyze their impacts separately. Bayesian inference makes it possible to analyze the uncertainty of crisis impacts.
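A simple illustration of the logistic-curve step, using an ordinary least-squares fit and simulated case counts rather than the paper's Bayesian regression: fit cumulative cases to a logistic function and read off the inflection point, i.e. the day with the maximum number of new cases.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))       # K: final size, t0: inflection day

t = np.arange(60)
true = logistic(t, 10000, 0.25, 30)
cases = true + np.random.default_rng(0).normal(0, 100, len(t))   # noisy cumulative cases (synthetic)

(K, r, t0), _ = curve_fit(logistic, t, cases, p0=[cases.max(), 0.1, len(t) / 2])
print(f"estimated peak of daily cases around day {t0:.1f}, final size ~{K:.0f}")
```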
SSRN
We study the role of retail investors in the gradual diffusion of information in financial markets. We show that retail investors tend to trade as contrarians after large earnings surprises, and such contrarian trading contributes to sluggish price adjustment and to momentum. Retail traders are particularly active in small loser stocks. As a robustness check, we double sort stocks into quintiles based on momentum and the strength of retail contrarian trading, and find that the momentum phenomenon arises only in the 4th and 5th quintiles of contrarian trading intensity. We further investigate the timing and the horizon of the traders, the role of segmentation in stock ownership, and the role of attention and browsing behavior in generating contrarian trading. Alternative hypotheses, such as the disposition effect and stale limit orders, do not explain the phenomenon.
arXiv
The `Black Thursday' crisis in cryptocurrency markets demonstrated deleveraging risks in over-collateralized lending and stablecoins. We develop a stochastic model of over-collateralized stablecoins that helps explain such crises. In our model, the stablecoin supply is decided by speculators who optimize the profitability of a leveraged position while incorporating the forward-looking cost of collateral liquidations, which involves the endogenous price of the stablecoin. We formally characterize stable and unstable domains for the stablecoin. We prove bounds on the probabilities of large deviations and quadratic variation in the stable domain and distinctly greater price variance in the unstable domain. The unstable domain can be triggered by large deviations, collapsed expectations, and liquidity problems from deleveraging. We formally characterize a deflationary deleveraging spiral as a submartingale that can cause such liquidity problems in a crisis. We also demonstrate `perfect' stability results in idealized settings and discuss mechanisms which could bring realistic settings closer to the idealized stable settings.
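A toy Monte Carlo illustration of the over-collateralisation mechanic, far simpler than the paper's model and with placeholder parameters: a speculator locks ETH collateral, mints stablecoin debt, and is liquidated if the collateral ratio drops below a threshold along a random collateral-price path.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 10_000, 250
sigma, dt = 0.05, 1.0                       # daily collateral-price volatility (placeholder)

collateral_eth = 1.0                        # ETH locked by the speculator
debt = 100.0                                # stablecoins minted, each targeting $1
liq_ratio = 1.5                             # liquidation threshold on collateral / debt

price0 = 200.0                              # initial ETH price => starting ratio of 2.0
shocks = rng.normal(-0.5 * sigma**2 * dt, sigma * np.sqrt(dt), (n_paths, n_steps))
prices = price0 * np.exp(np.cumsum(shocks, axis=1))

ratios = collateral_eth * prices / debt
liquidated = (ratios < liq_ratio).any(axis=1)
print("fraction of paths hitting liquidation within a year:", liquidated.mean())
```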