Research articles for 2019-12-24

A Gated Recurrent Unit Approach to Bitcoin Price Prediction
Aniruddha Dutta, Saket Kumar, Meheli Basu
arXiv

In today's era of big data, deep learning and artificial intelligence have formed the backbone of cryptocurrency portfolio optimization. Researchers have investigated various state-of-the-art machine learning models to predict Bitcoin price and volatility. Machine learning models such as recurrent neural networks (RNN) and long short-term memory (LSTM) networks have been shown to perform better than traditional time series models in cryptocurrency price prediction. However, very few studies have applied sequence models with robust feature engineering to predict future pricing. In this study, we investigate a framework of advanced machine learning methods with a fixed set of exogenous and endogenous factors to predict daily Bitcoin prices. We study and compare different approaches using the root mean squared error (RMSE). Experimental results show that a gated recurrent unit (GRU) model with recurrent dropout performs better than popular existing models. We also show that simple trading strategies, when implemented with our proposed GRU model and with proper learning, can lead to financial gain.
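
As a rough illustration of the kind of model the abstract describes, the sketch below builds a single-layer GRU regressor with recurrent dropout and scores it by RMSE; the window length, feature count, layer size, and dropout rate are assumptions, not the authors' settings.

```python
# Minimal sketch (not the authors' code): a GRU regressor with recurrent
# dropout for next-day price prediction, scored by RMSE on placeholder data.
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 30, 8   # assumed: 30-day lookback over 8 engineered factors

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.GRU(64, recurrent_dropout=0.2),
    tf.keras.layers.Dense(1),          # next-day price
])
model.compile(optimizer="adam", loss="mse")

# placeholder data: windows of exogenous/endogenous factors and next-day prices
X = np.random.rand(256, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(256).astype("float32")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

rmse = float(np.sqrt(np.mean((model.predict(X, verbose=0).ravel() - y) ** 2)))
print(f"in-sample RMSE: {rmse:.4f}")
```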



Forecasting Implied Volatility Smile Surface via Deep Learning and Attention Mechanism
Shengli Chen, Zili Zhang
arXiv

The implied volatility smile surface is the basis of option pricing, and the dynamic evolution of the option volatility smile surface is difficult to predict. In this paper, an attention mechanism is introduced into the LSTM, and a volatility surface prediction method combining deep learning and attention is established. The LSTM's forget gate gives it strong generalization ability, and its feedback structure enables it to characterize the long memory of financial volatility. Applying an attention mechanism to LSTM networks can significantly enhance their ability to select input features. The experimental results show that the two strategies constructed using the predicted implied volatility surfaces achieve higher returns and Sharpe ratios than the same strategies without surface prediction. This paper confirms that using AI to predict the implied volatility surface has theoretical and economic value. The research method provides a new reference for option pricing and strategy design.
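
The abstract does not specify the architecture, but a minimal sketch of an LSTM with learned attention pooling over its hidden states, predicting a flattened volatility surface grid, might look as follows; the sequence length and grid size are assumed.

```python
# Minimal sketch (an assumption, not the paper's exact architecture): an LSTM
# whose hidden states are pooled with attention weights before predicting the
# next day's flattened implied volatility surface.
import tensorflow as tf

SEQ_LEN = 20          # assumed: 20 days of past surfaces
GRID = 5 * 9          # assumed: 5 maturities x 9 moneyness levels, flattened

inputs = tf.keras.layers.Input(shape=(SEQ_LEN, GRID))              # past surfaces
states = tf.keras.layers.LSTM(128, return_sequences=True)(inputs)  # hidden states

scores = tf.keras.layers.Dense(1)(states)            # one score per time step
weights = tf.keras.layers.Softmax(axis=1)(scores)    # attention over time steps
context = tf.keras.layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([states, weights])

outputs = tf.keras.layers.Dense(GRID)(context)       # predicted next-day surface
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```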



Healthy Access for Healthy Places: A Multidimensional Food Access Measure
Irena Gao, Marynia Kolak
arXiv

When it comes to preventive healthcare, place matters. It is increasingly clear that social factors, particularly reliable access to healthy food, are as determinative of health and health equity as medical care. However, food access studies often present only one-dimensional measurements of access. We hypothesize that food access is a multidimensional concept and evaluate it using Penchansky and Thomas's 1981 definition of access. In our approach, we identify ten variables contributing to food access in the City of Chicago and use principal component analysis to identify vulnerable populations with low access. Our results indicate that within the urban environment of the case study site, affordability is the most important factor in low food accessibility, followed by factors associated with urban youth, reduced mobility, and larger immigrant populations.
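
A compact sketch of the PCA step described above, assuming ten standardized tract-level variables; the variable set and the flagging rule are illustrative, not the paper's.

```python
# Illustrative sketch: standardize ten tract-level access variables, project
# onto the leading principal components, and flag tracts with extreme scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# rows = census tracts, columns = ten hypothetical access variables
# (e.g. distance to nearest grocer, vehicle access, income, SNAP share, ...)
X = np.random.rand(800, 10)

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
low_access = scores[:, 0] > np.percentile(scores[:, 0], 90)  # worst decile on PC1
print(int(low_access.sum()), "tracts flagged as low-access")
```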



Learning the dynamics of technical trading strategies
Nicholas Murphy, Tim Gebbie
arXiv

We use an adversarial, expert-based online learning algorithm to learn the optimal parameters required to maximise wealth when trading zero-cost portfolio strategies. The learning algorithm is used to determine the relative population dynamics of technical trading strategies that can survive historical back-testing, as well as to form an overall aggregated portfolio trading strategy from the set of underlying trading strategies implemented on daily and intraday Johannesburg Stock Exchange data. The resulting population time-series are investigated using unsupervised learning for dimensionality reduction and visualisation. A key contribution is that the overall aggregated trading strategies are tested for statistical arbitrage using a novel hypothesis test proposed by Jarrow et al. (2012) on both daily-sampled and intraday time-scales. The (low-frequency) daily-sampled strategies fail the arbitrage tests after costs, while the (high-frequency) intraday-sampled strategies are not falsified as statistical arbitrages after costs. Estimates of trading strategy success, cost of trading and slippage are considered along with an online benchmark portfolio algorithm for performance comparison. In addition, the algorithm's generalisation error is analysed by recovering an estimate of the probability of back-test overfitting using a nonparametric procedure introduced by Bailey et al. (2016). The work aims to explore and better understand the interplay between different technical trading strategies from a data-informed perspective.
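
The abstract does not spell out the learning algorithm, so the sketch below uses a generic multiplicative-weights ("Hedge") aggregation over a set of expert trading strategies purely for illustration; the learning rate, horizon, and simulated expert returns are assumptions.

```python
# Generic multiplicative-weights ("Hedge") aggregation over expert trading
# strategies; the paper's actual algorithm and strategy set are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
T, n_experts = 250, 12                        # trading days, candidate strategies
expert_returns = rng.normal(0.0005, 0.01, size=(T, n_experts))  # simulated

eta = 0.5                                      # learning rate (assumed)
log_w = np.zeros(n_experts)
portfolio_returns = []

for t in range(T):
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                               # current mixture over experts
    portfolio_returns.append(w @ expert_returns[t])
    log_w += eta * expert_returns[t]           # reward experts by realized return

print("aggregate wealth factor:", float(np.prod(1 + np.array(portfolio_returns))))
```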



Online Quantification of Input Model Uncertainty by Two-Layer Importance Sampling
Tianyi Liu, Enlu Zhou
arXiv

Stochastic simulation has been widely used to analyze the performance of complex stochastic systems and facilitate decision making in those systems. Stochastic simulation is driven by the input model, which is a collection of probability distributions that model the stochasticity in the system. The input model is usually estimated using a finite amount of data, which introduces the so-called input model uncertainty (or input uncertainty for short) to the simulation output. How to quantify input uncertainty has been studied extensively, and many methods have been proposed for the batch data setting, i.e., when all the data are available at once. However, methods for "streaming data" arriving sequentially in time are still in demand, even though streaming data have become increasingly prevalent in modern applications. To fill this gap, we propose a two-layer importance sampling framework that incorporates streaming data for online input uncertainty quantification. Under this framework, we develop two algorithms that suit two different application scenarios: the first is when data come at a fast speed and there is no time for any simulation between updates; the second is when data come at a moderate speed and a few, but limited, simulations are allowed at each time stage. We establish consistency and asymptotic convergence rate results, which theoretically demonstrate the efficiency of our proposed approach. We further demonstrate the proposed algorithms on an example of the newsvendor problem.
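
A much simplified illustration of the importance-sampling reuse idea (not the paper's two-layer algorithm): stored simulation outputs generated under an old input-parameter estimate are reweighted by likelihood ratios once streaming data update the estimate. The exponential input model and parameter values below are assumptions.

```python
# Reweight old simulation outputs by likelihood ratios when the input-model
# parameter estimate is updated by streaming data (self-normalized weights).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

theta_old = 1.0                                   # old estimate of an exponential rate
sim_inputs = rng.exponential(1 / theta_old, size=(5000, 20))  # stored inputs
sim_outputs = sim_inputs.mean(axis=1)             # stored outputs (e.g. average delay)

theta_new = 1.3                                   # estimate updated by streaming data
log_ratio = (stats.expon(scale=1 / theta_new).logpdf(sim_inputs)
             - stats.expon(scale=1 / theta_old).logpdf(sim_inputs)).sum(axis=1)
w = np.exp(log_ratio - log_ratio.max())
w /= w.sum()                                      # self-normalized importance weights

print("reweighted estimate under theta_new:", float(w @ sim_outputs))
print("naive (unweighted) estimate:        ", float(sim_outputs.mean()))
```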



Predicting one type of technological motion? A nonlinear map to study the 'sailing-ship' effect
G. Filatrella, N. De Liso
arXiv

In this work we use a proven model to study a dynamic duopolistic competition between an old and a new technology which, through improved technical performance (e.g., data transmission capacity), compete to conquer market share. The process whereby an old technology fights off a new one through its own improvements has been named the 'sailing-ship effect'. In the simulations proposed, intentional improvements of both the old and the new technology are affected by the values of three key parameters: one scientific-technological, one purely technological, and the third purely economic. The interaction between these components gives rise to different outcomes in terms of the prevalence of one technology over the other.



Pricing and hedging American-style options with deep learning
Sebastian Becker, Patrick Cheridito, Arnulf Jentzen
arXiv

This paper describes a deep learning method for pricing and hedging American-style options. It first computes a candidate optimal stopping policy. From there it derives a lower bound for the price. Then it calculates an upper bound, a point estimate and confidence intervals. Finally, it constructs an approximate dynamic hedging strategy. We test the approach on different specifications of a Bermudan max-call option. In all cases it produces highly accurate prices and dynamic hedging strategies yielding small hedging errors.
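
For intuition, here is a heavily simplified sketch of the lower-bound step only, applied to a two-asset Bermudan max-call: a candidate stopping rule (a fixed payoff threshold standing in for the paper's learned networks) is evaluated on fresh Monte Carlo paths, and the mean discounted payoff at the induced stopping times is a valid lower bound. All market parameters are illustrative.

```python
# Lower-bound estimate from a candidate stopping rule on Monte Carlo paths
# of a two-asset Bermudan max-call (threshold rule used only for illustration).
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_dates = 100_000, 9
s0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 3.0
dt = T / n_dates

# simulate two independent geometric Brownian motions
z = rng.standard_normal((n_paths, n_dates, 2))
log_s = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
paths = s0 * np.exp(log_s)                              # (paths, dates, assets)

payoff = np.maximum(paths.max(axis=2) - K, 0.0)         # max-call payoff per date

# stand-in stopping rule: exercise once the payoff exceeds a fixed threshold
exercise = payoff >= 15.0
exercise[:, -1] = True                                   # must stop at maturity
stop_idx = exercise.argmax(axis=1)                       # first exercise date per path

discount = np.exp(-r * dt * (stop_idx + 1))
lower_bound = np.mean(discount * payoff[np.arange(n_paths), stop_idx])
print("lower-bound price estimate:", round(float(lower_bound), 3))
```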



Quantifying the Effects of the 2008 Recession using the Zillow Dataset
Arunav Gupta, Lucas Nguyen, Camille Dunning, Ka Ming Chan
arXiv

This report explores the use of Zillow's housing metrics dataset to investigate the effects of the 2008 US subprime mortgage crisis on various US locales. We begin by exploring the causes of the recession and the metrics available to us in the dataset. We settle on the Zillow Home Value Index (ZHVI) because it is seasonally adjusted and able to account for a variety of inventory factors. We then explore three methodologies for quantifying recession impact: (a) Principal Component Analysis (PCA), (b) Area Under Baseline (AUB), and (c) ARIMA modeling with confidence intervals. While PCA does not yield usable results, the AUB and ARIMA analyses each identify six cities, the top three "losers" and "gainers" of the 2008 recession as determined by that analysis, giving twelve cities in total. Finally, we test the robustness of our analysis against three "common knowledge" metrics for the recession: geographic clustering, population trends, and the unemployment rate. While we find some overlap between the results of our analysis and geographic clustering, there is no positive regression outcome from comparing our methodologies to population trends and the unemployment rate.
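
A brief sketch of the ARIMA-with-confidence-intervals idea on a synthetic ZHVI-like series (the model order, dates, and data are assumptions, not the report's): fit on pre-2008 values, forecast through the recession, and count the months where the observed index falls below the lower confidence bound.

```python
# Fit ARIMA on pre-recession values and compare the recession period against
# the forecast's 95% lower confidence bound (synthetic data for illustration).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

idx = pd.date_range("2000-01-31", periods=120, freq="M")
zhvi = pd.Series(200_000 + np.cumsum(np.random.normal(500, 800, 120)), index=idx)

train, test = zhvi[:"2007-12"], zhvi["2008-01":]
fit = ARIMA(train, order=(1, 1, 1)).fit()
pred = fit.get_forecast(steps=len(test))
lower = pred.conf_int(alpha=0.05).iloc[:, 0]      # lower 95% bound per month

impacted_months = int((test < lower.values).sum())
print("months below the 95% lower bound:", impacted_months)
```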



Relation between non-exchangeability and measures of concordance of copulas
Damjana Kokol Bukovšek, Tomaž Košir, Blaž Mojškerc, Matjaž Omladič
arXiv

An investigation is presented of how a comprehensive choice of the five most important measures of concordance (namely Spearman's rho, Kendall's tau, Gini's gamma, Blomqvist's beta, and their weaker counterpart, Spearman's footrule) relates to non-exchangeability, i.e., asymmetry, of copulas. Beyond these results, the proposed method also appears to be new and may serve as a template for exploring the relationship between a specific property of a copula and some of its measures of dependence structure, or the relationship between various measures of dependence structure themselves.
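
For reference, the measures named above have standard copula-based expressions (the paper's specific bounds relating them to non-exchangeability are not reproduced here):

```latex
\begin{align*}
  \rho(C)   &= 12 \int_0^1\!\!\int_0^1 C(u,v)\,du\,dv - 3
              && \text{(Spearman's rho)}\\
  \tau(C)   &= 4 \int_{[0,1]^2} C(u,v)\,dC(u,v) - 1
              && \text{(Kendall's tau)}\\
  \gamma(C) &= 4\left(\int_0^1 C(u,u)\,du + \int_0^1 C(u,1-u)\,du\right) - 2
              && \text{(Gini's gamma)}\\
  \beta(C)  &= 4\,C\!\left(\tfrac12,\tfrac12\right) - 1
              && \text{(Blomqvist's beta)}\\
  \phi(C)   &= 6 \int_0^1 C(t,t)\,dt - 2
              && \text{(Spearman's footrule)}\\
  \mu_\infty(C) &= \sup_{(u,v)\in[0,1]^2} \bigl|C(u,v) - C(v,u)\bigr|
              && \text{(a standard measure of non-exchangeability)}
\end{align*}
```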



Summary of the Report of the Study Group on Legal Issues regarding Central Bank Digital Currency
Kenji Hayashi, Hiroyuki Takano, Makoto Chiba, Yasuhiro Takamoto
RePEc

This article introduces the main findings of the Report of the Study Group on Legal Issues regarding Central Bank Digital Currency (CBDC). Based on four stylized models of CBDC issuance, the Report discusses what legal issues would arise within the Japanese legal framework if the Bank of Japan were to issue its own CBDC.

The Dynamics of Financial Markets: Fibonacci numbers, Elliott waves, and solitons
Inga Ivanova
arXiv

In this paper, an information-theoretic approach is applied to the description of financial markets. A model expected to describe market dynamics is presented, and it is shown that market trend and cycle dynamics can be described from a unified viewpoint. The model's predictions agree comparatively well with the Fibonacci ratios and numbers used in the analysis of market price and time projections. It proves possible to link time and price projections, thus allowing the moment of trend termination to be predicted well in advance with increased accuracy. The model is tested against real data from the stock and financial markets.