Research articles for 2020-09-07

A Stock Prediction Model Based on DCNN
Qiao Zhou,Ningning Liu
arXiv

The prediction of a stock price has always been a challenging issue, as its volatility can be affected by many factors such as national policies, company financial reports, industry performance, and investor sentiment. In this paper, we present a prediction model based on a deep CNN and candlestick charts, in which continuous-time stock information is processed into images. According to information richness, prediction time interval, and classification method, the original data are divided into multiple categories that serve as the CNN's training set. The convolutional neural network is then used to predict the stock market, and the differences in accuracy under the different classification methods are analyzed.

The results show that the method performs best when the forecast time interval is 20 days. Moreover, adding the Moving Average Convergence Divergence (MACD) and three kinds of moving average as inputs allows the method to predict the stock trend on the US NDAQ exchange with 92.2% accuracy. Meanwhile, the article compares three conventional classification methods to provide guidance for future research.
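
As a concrete illustration of the general approach (image-based trend classification with a convolutional network), the following minimal Keras sketch builds a small CNN over rendered chart images. The architecture, image size, and three-class labelling are illustrative assumptions, not the paper's exact configuration.

    # A small CNN classifying candlestick-chart images into trend
    # categories; shapes and layer sizes are illustrative assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 3  # e.g. up / flat / down over the forecast horizon (assumed)

    model = models.Sequential([
        layers.Input(shape=(64, 64, 3)),          # rendered candle-chart image
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_images, train_labels, epochs=10, validation_split=0.1)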



A graphical approach to carbon-efficient spot market scheduling for Power-to-X applications
Neeraj Bokde,Bo Tranberg,Gorm Bruun Andresen
arXiv

In the Paris Agreement of 2015, it was decided to reduce the CO2 emissions of the energy sector to zero by 2050 and to restrict the global mean temperature increase to 1.5 degrees Celsius above the pre-industrial level. Such commitments are possible only with practically CO2-free power generation based on variable renewable technologies. Historically, the main point of criticism regarding renewable power has been the variability driven by weather dependence. Power-to-X systems, which convert excess power to other stores of energy for later use, can play an important role in offsetting the variability of renewable power production. To do so, however, these systems have to be scheduled properly to ensure they are powered by low-carbon technologies. In this paper, we introduce a graphical approach for scheduling Power-to-X plants in the day-ahead market by minimizing carbon emissions and electricity costs. This graphical approach is simple to implement and to explain intuitively to stakeholders. In a simulation study using historical prices and CO2 intensity for four different countries, we find that the price and CO2 intensity tend to decrease with an increasing scheduling horizon. The effect diminishes when an increasing number of full-load hours per year is required. Additionally, investigating the trade-off between optimizing for price or CO2 intensity shows that it is indeed a trade-off: it is not possible to obtain the lowest price and the lowest CO2 intensity at the same time.
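
A minimal sketch of the underlying scheduling decision, not the paper's graphical method itself: pick the hours of a day-ahead horizon that minimise a weighted price/CO2 score. The trade-off weight alpha and the synthetic data are illustrative assumptions.

    # Carbon/price-aware day-ahead scheduling: run the plant in the hours
    # with the lowest weighted score (a simplified stand-in for the
    # paper's graphical approach).
    import numpy as np

    def schedule(prices, co2, hours_needed, alpha=0.5):
        """Pick `hours_needed` hours minimising a price/CO2 trade-off.

        alpha = 1.0 optimises for price only, alpha = 0.0 for CO2 only.
        Inputs are arrays over the scheduling horizon (e.g. 24 hours).
        """
        # Normalise both signals so the trade-off weight is meaningful.
        p = (prices - prices.min()) / (np.ptp(prices) + 1e-12)
        c = (co2 - co2.min()) / (np.ptp(co2) + 1e-12)
        score = alpha * p + (1.0 - alpha) * c
        chosen = np.argsort(score)[:hours_needed]
        on = np.zeros(len(prices), dtype=bool)
        on[chosen] = True
        return on

    rng = np.random.default_rng(0)
    prices = rng.uniform(20, 80, size=24)   # EUR/MWh (synthetic)
    co2 = rng.uniform(100, 400, size=24)    # gCO2/kWh (synthetic)
    print(schedule(prices, co2, hours_needed=8, alpha=0.5))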



A tale of two sentiment scales: Disentangling short-run and long-run components in multivariate sentiment dynamics
Danilo Vassallo,Giacomo Bormetti,Fabrizio Lillo
arXiv

We propose a novel approach to sentiment data filtering for a portfolio of assets. In our framework, a dynamic factor model drives the evolution of the observed sentiment and allows us to identify two distinct components: a long-term component, modeled as a random walk, and a short-term component driven by a stationary VAR(1) process. Our model encompasses alternative approaches available in the literature and can be readily estimated by means of Kalman filtering and expectation maximization. This feature makes it convenient when the cross-sectional dimension of the portfolio increases. By applying the model to a portfolio of Dow Jones stocks, we find that the long-term component co-integrates with the market principal factor, while the short-term one captures transient swings of the market associated with the idiosyncratic components and reflects the correlation structure of returns. Using quantile regressions, we assess the significance of the contemporaneous and lagged explanatory power of sentiment on returns, finding strong statistical evidence when extreme returns, especially negative ones, are considered. Finally, the lagged relation is exploited in a portfolio allocation exercise.
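
To illustrate the decomposition idea in the simplest possible setting, here is a univariate Kalman-filter sketch where observed sentiment is a random-walk long-run component plus an AR(1) short-run component plus noise. The parameter values are assumptions; the paper's model is multivariate, with VAR(1) short-run dynamics and EM parameter estimation.

    # Decompose a sentiment series into long-run (random walk) and
    # short-run (AR(1)) components with a standard Kalman filter.
    import numpy as np

    def kalman_decompose(y, phi=0.7, q_long=0.01, q_short=0.1, r=0.5):
        F = np.array([[1.0, 0.0], [0.0, phi]])   # state transition
        H = np.array([[1.0, 1.0]])               # y = long + short + noise
        Q = np.diag([q_long, q_short])
        x = np.zeros(2)
        P = np.eye(2)
        out = np.zeros((len(y), 2))
        for t, yt in enumerate(y):
            # Predict.
            x = F @ x
            P = F @ P @ F.T + Q
            # Update.
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * (yt - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
            out[t] = x
        return out[:, 0], out[:, 1]  # long-run and short-run estimates

    rng = np.random.default_rng(1)
    T = 500
    long_run = np.cumsum(0.1 * rng.standard_normal(T))
    short_run = np.zeros(T)
    for t in range(1, T):
        short_run[t] = 0.7 * short_run[t - 1] + 0.3 * rng.standard_normal()
    y = long_run + short_run + 0.7 * rng.standard_normal(T)
    mu_hat, s_hat = kalman_decompose(y)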



Alòs type decomposition formula for Barndorff-Nielsen and Shephard model
Takuji Arai
arXiv

The objective is to provide an Alòs type decomposition formula of call option prices for the Barndorff-Nielsen and Shephard model: an Ornstein-Uhlenbeck type stochastic volatility model driven by a subordinator without drift. Alòs (2012) introduced a decomposition expression for the Heston model by using Itô's formula. In this paper, we extend it to the Barndorff-Nielsen and Shephard model. As far as we know, this is the first result on an Alòs type decomposition formula for models with infinitely active jumps.



Bear Markets and Recessions versus Bull Markets and Expansions
Abdulnasser Hatemi-J
arXiv

This paper examines the dynamic interaction between falling and rising markets for both the real and the financial sectors of the largest economy in the world using asymmetric causality tests. These tests require that each underlying variable in the model be transformed into partial sums of its positive and negative components. The positive components represent the rising markets and the negative components embody the falling markets. The sample period covers part of the COVID-19 pandemic. Since the data are non-normal and the volatility is time-varying, bootstrap simulations with leverage adjustments are used to create reliable critical values when the causality tests are conducted. The results of the asymmetric causality tests disclose that bear markets cause recessions, just as bull markets cause economic expansions. The causal effect of bull markets on economic expansions is larger than the causal effect of bear markets on recessions. In addition, it is found that economic expansions cause bull markets, but recessions do not cause bear markets. Thus, policies that remedy falling financial markets can also help the economy when it is in a recession.
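
The cumulative-sum transformation the tests rely on is simple to state in code. The sketch below implements exactly that split of a series into partial sums of its positive and negative changes; the bootstrap causality test itself is not reproduced.

    # Split changes of a series into positive ("rising market") and
    # negative ("falling market") components and accumulate each.
    import numpy as np

    def partial_sums(y):
        """Decompose y into partial sums of positive and negative changes."""
        dy = np.diff(y, prepend=y[0])
        y_plus = np.cumsum(np.maximum(dy, 0.0))   # rising-market component
        y_minus = np.cumsum(np.minimum(dy, 0.0))  # falling-market component
        return y_plus, y_minus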



Capturing dynamics of post-earnings-announcement drift using genetic algorithm-optimised supervised learning
Zhengxin Joseph Ye,Bjorn W. Schuller
arXiv

While Post-Earnings-Announcement Drift (PEAD) is one of the most studied stock market anomalies, the current literature is often limited to explaining this phenomenon by a small number of factors using simpler regression methods. In this paper, we use a machine learning based approach instead, and aim to capture the PEAD dynamics using data from a large group of stocks and a wide range of both fundamental and technical factors. Our model is built around Extreme Gradient Boosting (XGBoost) and uses a long list of engineered input features based on quarterly financial announcement data from 1,106 companies in the Russell 1000 index between 1997 and 2018. We perform numerous experiments on PEAD prediction and analysis and make the following contributions to the literature. First, we show how Post-Earnings-Announcement Drift can be analysed using machine learning methods and demonstrate such methods' prowess in producing credible forecasts of the drift direction. It is the first time PEAD dynamics have been studied using XGBoost. We show that the drift direction is in fact driven by different factors for stocks from different industrial sectors and in different quarters, and that XGBoost is effective in capturing the changing drivers. Second, we show that an XGBoost model well optimised by a genetic algorithm can help allocate out-of-sample stocks to form portfolios with higher positive returns to long and portfolios with lower negative returns to short, a finding that could be adopted in the process of developing market-neutral strategies. Third, we show how theoretical event-driven stock strategies have to grapple with ever-changing market prices in reality, reducing their effectiveness. We present a tactic to remedy the difficulty of buying into a moving market when dealing with PEAD signals.
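
The following sketch shows one minimal way to combine XGBoost with a genetic algorithm (selection plus mutation over hyperparameters). The search space, GA settings, and synthetic data are illustrative assumptions, not the paper's setup or features.

    # Toy GA over XGBoost hyperparameters, scored by cross-validation.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    def random_genome():
        return {"max_depth": int(rng.integers(2, 8)),
                "learning_rate": float(rng.uniform(0.01, 0.3)),
                "n_estimators": int(rng.integers(50, 300))}

    def fitness(g):
        model = XGBClassifier(**g, eval_metric="logloss")
        return cross_val_score(model, X, y, cv=3).mean()

    population = [random_genome() for _ in range(8)]
    for generation in range(5):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:4]                       # selection
        children = []
        for p in parents:                          # mutation
            child = dict(p)
            child["learning_rate"] = float(np.clip(
                child["learning_rate"] * rng.uniform(0.5, 1.5), 0.01, 0.3))
            children.append(child)
        population = parents + children
    print(max(population, key=fitness))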



Closed-form approximations in multi-asset market making
Philippe Bergault,David Evangelista,Olivier Guéant,Douglas Vieira
arXiv

A large proportion of market making models derive from the seminal model of Avellaneda and Stoikov. The numerical approximation of the value function and the optimal quotes in these models remains a challenge when the number of assets is large. In this article, we propose closed-form approximations for the value functions of many multi-asset extensions of the Avellaneda-Stoikov model. These approximations or proxies can be used (i) as heuristic evaluation functions, (ii) as initial value functions in reinforcement learning algorithms, and/or (iii) directly to design quoting strategies through a greedy approach. Regarding the latter, our results lead to new and easily interpretable closed-form approximations for the optimal quotes, both in the finite-horizon case and in the asymptotic (ergodic) regime. Furthermore, we propose a perturbative approach to improve our closed-form approximations through Monte Carlo simulations.
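
For reference, here is the well-known closed form of the *single-asset* Avellaneda-Stoikov model that the paper extends; the paper's multi-asset approximations are not reproduced here, and the parameter values below are only an example.

    # Single-asset Avellaneda-Stoikov reservation price and quotes.
    import numpy as np

    def as_quotes(s, q, t, T, gamma, sigma, k):
        """Mid-price s, inventory q, time t, horizon T, risk aversion
        gamma, volatility sigma, order-flow decay k."""
        reservation = s - q * gamma * sigma**2 * (T - t)
        spread = gamma * sigma**2 * (T - t) + (2.0 / gamma) * np.log(1.0 + gamma / k)
        bid = reservation - spread / 2.0
        ask = reservation + spread / 2.0
        return bid, ask

    print(as_quotes(s=100.0, q=2, t=0.0, T=1.0, gamma=0.1, sigma=2.0, k=1.5))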



Dependent Conditional Value-at-Risk for Aggregate Risk Models
Bony Josaphat,Khreshna Syuhada
arXiv

Risk measure models and forecasts have been developed not only to provide better forecasts but also to preserve (empirical) properties, especially coherence. While the widely used risk measure Value-at-Risk (VaR) has shown its performance and benefit in many applications, it is in fact not a coherent risk measure. Conditional VaR (CoVaR), defined as the mean of losses beyond VaR, is an alternative risk measure that satisfies the coherence property. There have been several extensions of CoVaR, such as Modified CoVaR (MCoVaR) and Copula CoVaR (CCoVaR). In this paper, we propose another risk measure, called Dependent CoVaR (DCoVaR), for a target loss that depends on another random loss, including a model parameter treated as a random loss. It is found that our DCoVaR outperforms both MCoVaR and CCoVaR. A numerical simulation is carried out to illustrate the proposed DCoVaR. In addition, we conduct an empirical study of financial returns data to compute the DCoVaR forecast for a heteroscedastic process.
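
A minimal empirical sketch of the two building blocks, VaR and the "mean of losses beyond VaR" measure the abstract calls CoVaR (elsewhere known as Expected Shortfall); the DCoVaR construction itself is not reproduced, and the loss distribution is synthetic.

    # Empirical VaR and mean-loss-beyond-VaR from a loss sample.
    import numpy as np

    def var_covar(losses, alpha=0.95):
        var = np.quantile(losses, alpha)        # Value-at-Risk
        covar = losses[losses > var].mean()     # mean loss beyond VaR
        return var, covar

    rng = np.random.default_rng(0)
    losses = rng.standard_t(df=4, size=100_000)  # heavy-tailed losses
    print(var_covar(losses, alpha=0.95))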



Do Black and Indigenous Communities Receive their Fair Share of Vaccines Under the 2018 CDC Guidelines?
Parag A. Pathak,Harald Schmidt,Adam Solomon,Edwin Song,Tayfun Sönmez,M. Utku Ünver
arXiv

A major focus of debate about rationing guidelines for COVID-19 vaccines is whether and how to prioritize access for minority populations that have been particularly affected by the pandemic and have been the subject of historical and structural disadvantage, particularly Black and Indigenous individuals. We simulate the 2018 CDC Vaccine Allocation guidelines using data from the American Community Survey under different assumptions on total vaccine supply. Black and Indigenous individuals combined receive a higher share of vaccines compared to their population share for all assumptions on total vaccine supply. However, their vaccine share under the 2018 CDC guidelines is considerably lower than their share of COVID-19 deaths and age-adjusted deaths. We then simulate one method to incorporate disadvantage in vaccine allocation via a reserve system. In a reserve system, units are placed into categories, and units reserved for a category give preferential treatment to individuals from that category. Using the Area Deprivation Index (ADI) as a proxy for disadvantage, we show that a 40% high-ADI reserve increases the number of vaccines allocated to Black or Indigenous individuals, with a share that approaches their COVID-19 death share when there are about 75 million units. Our findings illustrate that whether an allocation is equitable depends crucially on the benchmark and highlight the importance of considering the expected distribution of outcomes from implementing vaccine allocation guidelines.
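
The reserve-system logic can be sketched in a few lines. The version below is a deliberately simplified toy (a single high-ADI reserve, open units processed first, unused reserved units not reverted); the paper's processing order and category structure are richer.

    # Toy reserve system: a fraction of units is reserved for a priority
    # category (here, high-ADI individuals).
    def allocate(people, supply, reserve_frac):
        """people: list of (id, is_high_adi, priority_score) tuples,
        higher score = higher priority. Returns the set of vaccinated ids."""
        reserved = int(supply * reserve_frac)
        open_units = supply - reserved
        by_priority = sorted(people, key=lambda p: -p[2])
        vaccinated = set()
        # Open units: everyone competes on priority.
        for pid, _, _ in by_priority:
            if len(vaccinated) >= open_units:
                break
            vaccinated.add(pid)
        # Reserved units: high-ADI individuals only, while units last.
        for pid, high_adi, _ in by_priority:
            if len(vaccinated) >= supply:
                break
            if high_adi and pid not in vaccinated:
                vaccinated.add(pid)
        return vaccinated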



Effect of pop-up bike lanes on cycling in European cities
Sebastian Kraus,Nicolas Koch
arXiv

The bicycle is a low-cost means of transport linked to a low risk of COVID-19 transmission. Governments have incentivized cycling by redistributing street space as part of their post-lockdown strategies. Here, we evaluate the impact of provisional bicycle infrastructure on cycling traffic in European cities. We scrape daily bicycle counts spanning more than a decade from 736 bicycle counters in 106 European cities and combine them with data on announced and completed pop-up bike lane road work projects. On average, 11.5 kilometers of provisional pop-up bike lanes have been built per city, and each kilometer has increased cycling in a city by 0.6%. We calculate that the new infrastructure will generate $2.3 billion in health benefits per year if cycling habits are sticky.



Gift Contagion in Online Groups: Evidence From WeChat Red Packets
Yuan Yuan,Tracy Liu,Chenhao Tan,Qian Chen,Alex Pentland,Jie Tang
arXiv

Our study seeks to identify the social contagion of in-group gifts: if gifts trigger their recipients to send more gifts subsequently, the actual impact of a gift on group dynamics would be greatly amplified. Causal identification of contagion is always challenging in observational data. To identify gift contagion, we leverage a natural experiment using a large sample of 36 million online red packets sent within 174,131 chat groups on WeChat, one of the largest social network services worldwide. Our natural experiment is enabled by WeChat's random gift amount algorithm, with which the amount that a user receives does not depend on her own attributes. We find that, on average, receiving one more dollar causes a recipient to send 18 cents back to the group within the subsequent 24 hours. Moreover, this effect is much stronger for "luckiest draw" recipients, those who receive the largest share from a red packet, suggesting a group norm according to which the luckiest draw recipients should send the first subsequent red packets. Additionally, we find that gift contagion is affected by in-group friendship network properties, such as the number of in-group friends and the local clustering coefficient.
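
To illustrate the kind of randomization the identification strategy relies on, here is a sketch of the random red-packet split often attributed to WeChat (each draw roughly uniform on (0, twice the remaining mean)). This is a popular folk approximation, not the platform's confirmed algorithm and not taken from the paper.

    # Toy random red-packet split ("double mean" style), amounts in cents.
    import random

    def split_packet(total_cents, n_recipients):
        amounts = []
        remaining = total_cents
        for left in range(n_recipients, 1, -1):
            draw = random.randint(1, max(1, 2 * remaining // left - 1))
            amounts.append(draw)
            remaining -= draw
        amounts.append(remaining)  # last recipient takes the rest
        return amounts

    print(split_packet(total_cents=1000, n_recipients=5))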



Kelly Betting with Quantum Payoff: a continuous variable approach
Salvatore Tirone,Maddalena Ghio,Giulia Livieri,Vittorio Giovannetti,Stefano Marmi
arXiv

The main purpose of this study is to introduce a semi-classical model describing betting scenarios in which, at variance with conventional approaches, the payoff of the gambler is encoded into the internal degrees of freedom of a quantum memory element. In our scheme, we assume that the invested capital is explicitly associated with the free energy (ergotropy) of a single mode of the electromagnetic radiation which, depending on the outcome of the betting, experiences attenuation or amplification processes that model losses and winning events. In particular, the evolution of the quantum memory results in a stochastic trajectory which we characterize within the theoretical setting of Bosonic Gaussian channels. As in the classical Kelly criterion for optimal betting, we define the asymptotic doubling rate of the model and identify the optimal gambling strategy for fixed odds and probabilities of winning. The performance of the model is then studied as a function of the input capital state under the assumption that the latter belongs to the set of Gaussian density matrices (i.e., displaced, squeezed thermal Gibbs states), revealing that the best option for the gambler is to devote all of her/his initial resources to the coherent-state amplitude.
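
For reference, the *classical* Kelly criterion the quantum model generalises is a one-liner: for a binary bet with net odds b and win probability p, the optimal fraction and doubling rate are as below (parameter values are just an example).

    # Classical Kelly fraction and asymptotic doubling rate.
    import numpy as np

    def kelly_fraction(p, b):
        return (p * b - (1.0 - p)) / b

    def doubling_rate(f, p, b):
        return p * np.log2(1.0 + f * b) + (1.0 - p) * np.log2(1.0 - f)

    p, b = 0.6, 1.0
    f_star = kelly_fraction(p, b)            # 0.2 for these values
    print(f_star, doubling_rate(f_star, p, b))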



Kernel-based collocation methods for Heath-Jarrow-Morton models with Musiela parametrization
Yuki Kinoshita,Yumiharu Nakano
arXiv

We propose kernel-based collocation methods for numerical solutions to Heath-Jarrow-Morton models with Musiela parametrization. The methods can be seen as the Euler-Maruyama approximation of some finite-dimensional stochastic differential equations, and allow us to compute derivative prices by standard Monte Carlo methods. We derive a bound on the rate of convergence under some decay condition on the inverse of the interpolation matrix and some regularity conditions on the volatility functionals.
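
A generic Euler-Maruyama sketch of the kind of approximation the abstract refers to, here for a scalar SDE dX = a(X)dt + b(X)dW with a Monte Carlo payoff at the end; the paper's kernel-based collocation of the Musiela HJM dynamics is not reproduced, and all parameters below are illustrative.

    # Euler-Maruyama simulation of a scalar SDE, then a Monte Carlo payoff.
    import numpy as np

    def euler_maruyama(a, b, x0, T, n_steps, n_paths, seed=0):
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        x = np.full(n_paths, x0, dtype=float)
        for _ in range(n_steps):
            dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
            x = x + a(x) * dt + b(x) * dw
        return x

    # Example: Vasicek-type mean reversion with a call-style payoff.
    x_T = euler_maruyama(a=lambda x: 0.5 * (0.03 - x),
                         b=lambda x: 0.01 * np.ones_like(x),
                         x0=0.02, T=1.0, n_steps=250, n_paths=100_000)
    print(np.maximum(x_T - 0.025, 0.0).mean())  # undiscounted payoff estimate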



Limit Order Book (LOB) shape modeling in presence of heterogeneously informed market participants
Mouhamad Drame
arXiv

The modeling of the limit order book is directly related to the assumptions on the behavior of real market participants. The contribution of this paper is twofold. We first present empirical findings that lay the ground for two improvements to these models. The first is concerned with market participants, by adding the additional dimension of informed market makers, whereas the second, and perhaps more original one, addresses the race in the book between informed traders and informed market makers, which leads to different shapes of the order book.

Namely, we build an agent-based model for the order book with four types of market participants: informed traders, noise traders, informed market makers, and noise market makers. We build our model on the Glosten-Milgrom approach and the more recent Huang-Rosenbaum-Saliba approach. We introduce a parameter capturing the race between informed liquidity traders and suppliers after new information on the fundamental value of the asset arrives. We then derive the whole 'static' limit order book and its characteristics -- namely the bid-ask spread and the volumes available at each price level -- from the interactions between the agents, and compare it with the pre-existing model. We then discuss the case where noise traders have an impact on the fundamental value of the asset and extend the model to take into account many kinds of informed market makers.
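
A minimal sketch of the Glosten-Milgrom spread the model builds on: the market maker quotes conditional expectations of the asset value given the direction of the incoming order. The two-point value distribution and parameter values are illustrative, not the paper's richer agent structure.

    # Glosten-Milgrom quotes with a fraction mu of informed traders;
    # uninformed traders buy/sell 50/50, value is v_low or v_high (50/50).
    def gm_quotes(v_low, v_high, mu):
        p_buy_high = mu + (1.0 - mu) / 2.0   # informed buy when value is high
        p_buy_low = (1.0 - mu) / 2.0
        p_buy = 0.5 * (p_buy_high + p_buy_low)
        ask = 0.5 * (p_buy_high * v_high + p_buy_low * v_low) / p_buy
        bid = 0.5 * ((1.0 - p_buy_high) * v_high
                     + (1.0 - p_buy_low) * v_low) / (1.0 - p_buy)
        return bid, ask

    print(gm_quotes(v_low=99.0, v_high=101.0, mu=0.3))  # spread widens with mu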



Measuring the Input Rank in Global Supply Networks
Armando Rungi,Loredana Fattorini,Kenan Huremovic
arXiv

We introduce the Input Rank as a measure of the relevance of direct and indirect suppliers in Global Value Chains. We conceive an intermediate input to be more relevant for a downstream buyer if a decrease in that input's productivity affects that buyer more. In particular, in our framework, the relevance of any input depends on: i) the network position of the supplier relative to the buyer; ii) the patterns of intermediate input vs labor intensities connecting the buyer and the supplier; and iii) the competitive pressures along supply chains. Computing the Input Rank from both U.S. and world input-output tables, we provide useful insights on the crucial role of services inputs, as well as on the relatively higher relevance of domestic suppliers and suppliers coming from regionally integrated partners. Finally, we test that the Input Rank is a good predictor of vertical integration choices made by 20,489 U.S. parent companies controlling 154,836 subsidiaries worldwide.
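
A Leontief-inverse-style sketch of direct-plus-indirect supplier relevance on a toy input-output matrix. This is a simplified proxy in the spirit of the Input Rank, not the paper's exact measure (which also weights labor intensities and competitive pressure); the matrix is synthetic.

    # Direct + indirect supplier exposure via the Leontief inverse.
    import numpy as np

    # A[i, j] = share of industry i's output used as input by industry j.
    A = np.array([[0.10, 0.30, 0.05],
                  [0.20, 0.05, 0.25],
                  [0.05, 0.15, 0.10]])

    L = np.linalg.inv(np.eye(3) - A)   # sums over all supply paths
    # L[i, j] aggregates direct and indirect exposure of buyer j to supplier i.
    buyer = 2
    relevance = L[:, buyer] / L[:, buyer].sum()
    print(relevance)                   # normalised supplier relevance for buyer 2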



Multigrid Iterative Algorithms based on Compact Finite Difference Schemes and Hermite interpolation for Solving Regime Switching American Options
Chinonso Nwankwo,Weizhong Dai
arXiv

We present multigrid iterative algorithms for solving a system of coupled free boundary problems that arises in pricing American put options with regime switching. The algorithms are based on our recently developed compact finite difference scheme coupled with Hermite interpolation for solving the m coupled partial differential equations for the asset, delta, gamma, and speed options. In the algorithms, we first use Gauss-Seidel as a smoother and then implement V-cycle and modified multigrid strategies for solving our discretized equations. Hermite interpolation with Newton interpolatory divided differences (as the basis) is used to estimate the coupled asset, delta, gamma, and speed options in the set of equations. A numerical experiment is performed with a two-regime example and compared with other existing methods to validate the optimal strategy. Results show that these algorithms provide fast and efficient tools for pricing American put options with regime switching.
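
To make the V-cycle/Gauss-Seidel machinery concrete, here is a compact sketch on the standard 1-D Poisson model problem. The paper applies the same ideas to coupled regime-switching option PDEs with compact differences and Hermite interpolation, none of which is reproduced here.

    # V-cycle multigrid with a Gauss-Seidel smoother for -u'' = f on (0,1),
    # zero boundary conditions, n = 2^k - 1 interior points.
    import numpy as np

    def gauss_seidel(u, f, h, sweeps):
        for _ in range(sweeps):
            for i in range(len(u)):
                left = u[i - 1] if i > 0 else 0.0
                right = u[i + 1] if i < len(u) - 1 else 0.0
                u[i] = 0.5 * (h * h * f[i] + left + right)
        return u

    def residual(u, f, h):
        r = np.empty_like(u)
        for i in range(len(u)):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < len(u) - 1 else 0.0
            r[i] = f[i] - (2.0 * u[i] - left - right) / (h * h)
        return r

    def v_cycle(u, f, h, pre=2, post=2):
        u = gauss_seidel(u, f, h, pre)
        if len(u) > 3:                          # recurse until coarsest grid
            r = residual(u, f, h)
            m = (len(u) - 1) // 2
            rc = np.array([0.25 * (r[2*j] + 2.0 * r[2*j+1] + r[2*j+2])
                           for j in range(m)])  # full-weighting restriction
            ec = v_cycle(np.zeros(m), rc, 2.0 * h)
            e = np.zeros_like(u)                # linear prolongation
            e[1::2] = ec
            e[0] = 0.5 * ec[0]
            e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
            e[-1] = 0.5 * ec[-1]
            u = u + e                           # coarse-grid correction
        return gauss_seidel(u, f, h, post)

    n = 2**7 - 1
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)            # exact solution: sin(pi x)
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, f, h)
    print(np.max(np.abs(u - np.sin(np.pi * x))))  # small discretization error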



Nice guys don't always finish last: succeeding in hierarchical organizations
Doron Klunover
arXiv

What are the chances of an ethical individual rising through the ranks of a political party or a corporation in the presence of unethical peers? To answer this question, I consider a four-player, two-stage elimination tournament in which players are partitioned into those who are willing to engage in sabotage and those who are not. I show that, under certain conditions, the latter are more likely to win the tournament.



Skewing Quanto with Simplicity
George Hong
arXiv

We present a simple and highly efficient analytical method for solving the Quanto Skew problem in Equities under a framework that accommodates both Equity and FX volatility skew consistently. Ease of implementation and extremely fast performance of this new approach should benefit a wide spectrum of market participants.



The Seven-League Scheme: Deep learning for large time step Monte Carlo simulations of stochastic differential equations
Shuaiqiang Liu,Lech A. Grzelak,Cornelis W. Oosterlee
arXiv

We propose an accurate data-driven numerical scheme to solve Stochastic Differential Equations (SDEs) by taking large time steps. The SDE discretization is built up by means of a polynomial chaos expansion method, on the basis of accurately determined stochastic collocation (SC) points. By employing an artificial neural network to learn these SC points, we can perform Monte Carlo simulations with large time steps. Error analysis confirms that this data-driven scheme results in accurate SDE solutions in the sense of strong convergence, provided the learning methodology is robust and accurate. With a variant called the compression-decompression collocation and interpolation technique, we can drastically reduce the number of neural network functions that have to be learned, so that computational speed is enhanced. Numerical results show high-quality strong convergence even when large time steps are used, and the novel scheme outperforms some classical numerical SDE discretizations. Some applications, here in financial option valuation, are also presented.
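
A minimal sketch of the stochastic-collocation idea behind the scheme: interpolate the inverse-CDF map at a few collocation points so that cheap standard-normal samples reproduce a target distribution over one large step (here the exactly known GBM marginal, so accuracy can be checked). The neural-network learning of the SC points is not reproduced, and all parameters are illustrative.

    # Stochastic collocation: polynomial surrogate of the quantile map.
    import numpy as np
    from scipy.interpolate import lagrange

    S0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0   # one large time step of GBM

    # Collocation points: probabilists' Gauss-Hermite nodes for N(0,1).
    z_nodes, _ = np.polynomial.hermite_e.hermegauss(5)
    # Exact GBM quantiles at those nodes (the "expensive" inverse CDF).
    s_nodes = S0 * np.exp((mu - 0.5 * sigma**2) * T
                          + sigma * np.sqrt(T) * z_nodes)
    g = lagrange(z_nodes, s_nodes)              # cheap polynomial surrogate

    rng = np.random.default_rng(0)
    z = rng.standard_normal(100_000)
    s_approx = g(z)                             # large-step samples via surrogate
    s_exact = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    print(abs(s_approx.mean() - s_exact.mean()),
          abs(s_approx.std() - s_exact.std()))  # moments should be close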



Uncovering the Dynamics of Correlation Structures Relative to the Collective Market Motion
Anton J. Heckens,Sebastian M. Krause,Thomas Guhr
arXiv

The measured correlations of financial time series in subsequent epochs change considerably as a function of time. When studying whole correlation matrices, quasi-stationary patterns, referred to as market states, are seen by applying clustering methods. They emerge, disappear or reemerge, but they are dominated by the collective motion of all stocks. In the jargon, one speaks of the market motion; it is always associated with the largest eigenvalue of the correlation matrices. The question thus arises whether one can extract more refined information on the system by subtracting the dominating market motion in a proper way. To this end, we introduce a new approach: we cluster reduced-rank correlation matrices, obtained by subtracting the dyadic matrix belonging to the largest eigenvalue from the standard correlation matrices. We analyze daily data of 262 companies of the S&P 500 index over a period of almost 15 years, from 2002 to 2016. The resulting dynamics is remarkably different, and the corresponding market states are quasi-stationary over a long period of time. Our approach adds to the attempts to separate endogenous from exogenous effects.
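
The reduction step itself is a one-line matrix operation, sketched below on synthetic data: subtract the dyadic matrix of the largest eigenvalue (the collective market mode) from the correlation matrix before clustering. The clustering of the resulting matrices across epochs is not reproduced.

    # Remove the market mode from a correlation matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    returns = rng.standard_normal((250, 50))      # (days, stocks), synthetic
    market = rng.standard_normal((250, 1))
    returns += 0.5 * market                       # inject a common market mode

    C = np.corrcoef(returns, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(C)          # ascending eigenvalues
    lam, v = eigvals[-1], eigvecs[:, -1:]
    C_reduced = C - lam * (v @ v.T)               # reduced-rank correlation matrix
    # One C_reduced per epoch would then be fed to the clustering step.
    print(np.linalg.eigvalsh(C_reduced)[-1], lam)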



Why Quantitative Structuring?
Andrei N. Soklakov
arXiv

Quality-designed consumer products are easy to recognize. Wouldn't it be great if the quality of financial products became just as apparent? This paper is addressed to financial practitioners. It provides an informal introduction to Quantitative Structuring -- a technology of manufacturing quality financial products (information derivatives). The presentation is arranged in three parts: the main text assumes no prior knowledge of the topic; important detailed discussions are arranged as a set of appendices; finally, a list of references provides further details including applications beyond product design: from model risk to economics and statistics.