Research articles for 2020-09-13

A Dual Characterisation of Regulatory Arbitrage for Coherent Risk Measures
Martin Herdegen, Nazem Khan
arXiv

We revisit mean-risk portfolio selection in a one-period financial market where risk is quantified by a positively homogeneous risk measure $\rho$ on $L^1$. We first show that under mild assumptions, the set of optimal portfolios for a fixed return is nonempty and compact. However, unlike in classical mean-variance portfolio selection, it can happen that no efficient portfolios exist. We call this situation regulatory arbitrage, and prove that it cannot be excluded - unless $\rho$ is as conservative as the worst-case risk measure.

After providing a primal characterisation, we focus our attention on coherent risk measures, and give a necessary and sufficient characterisation for regulatory arbitrage. We show that the presence or absence of regulatory arbitrage for $\rho$ is intimately linked to the interplay between the set of equivalent martingale measures (EMMs) for the discounted risky assets and the set of absolutely continuous measures in the dual representation of $\rho$. A special case of our result shows that the market does not admit regulatory arbitrage for Expected Shortfall at level $\alpha$ if and only if there exists an EMM $\mathbb{Q} \approx \mathbb{P}$ such that $\Vert \frac{\text{d}\mathbb{Q}}{\text{d}\mathbb{P}} \Vert_{\infty} < \frac{1}{\alpha}$.
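
The Expected Shortfall criterion above lends itself to a quick numerical illustration. Below is a minimal sketch (ours, not the authors' code) verifying on a finite sample space that the primal definition of Expected Shortfall coincides with the dual supremum over densities bounded by $1/\alpha$; all names and parameters are illustrative.

```python
import numpy as np

# Minimal sketch of the dual bound in the final claim: on a finite sample
# space with uniform P, Expected Shortfall at level alpha satisfies
#     ES_alpha(X) = sup { E_Q[-X] : dQ/dP <= 1/alpha },
# with the supremum attained by the density putting mass 1/alpha on the
# worst alpha-fraction of outcomes.

rng = np.random.default_rng(0)
alpha = 0.05
n = 100_000                          # chosen so that alpha * n is an integer
X = rng.normal(size=n)               # portfolio P&L under uniform P

# Primal: average loss over the worst alpha-fraction of scenarios.
k = int(alpha * n)
worst = np.argsort(X)[:k]            # indices of the k worst outcomes
es_primal = np.mean(-X[worst])

# Dual: Radon-Nikodym density 1/alpha on those scenarios, 0 elsewhere.
dQ_dP = np.zeros(n)
dQ_dP[worst] = 1.0 / alpha
es_dual = np.mean(dQ_dP * (-X))      # E_Q[-X] = E_P[(dQ/dP) * (-X)]

print(es_primal, es_dual)            # both ~2.06 for a standard normal
```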



A deep learning approach for computations of exposure profiles for high-dimensional Bermudan options
Kristoffer Andersson, Cornelis Oosterlee
arXiv

In this paper, we propose a neural network-based method for approximating expected exposures and potential future exposures of Bermudan options. In a first phase, the method relies on the Deep Optimal Stopping algorithm, which learns the optimal stopping rule from Monte-Carlo samples of the underlying risk factors. Cashflow paths are then created by applying the learned stopping strategy to a new set of realizations of the risk factors. In a second phase, the cashflow paths are regressed against the risk factors to obtain approximations of pathwise option values. The regression step is carried out by ordinary least squares as well as by neural networks, and it is shown that the latter produces more accurate approximations.

The expected exposure is formulated both in terms of the cashflow paths and in terms of the pathwise option values, and it is shown that a simple Monte-Carlo average yields accurate approximations in both cases. The potential future exposure is estimated by the empirical $\alpha$-percentile.

Finally, it is shown that the expected exposures, as well as the potential future exposures, can be computed under either the risk-neutral measure or the real-world measure without having to re-train the neural networks.
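
The second-phase regression lends itself to a compact illustration. The following is a minimal sketch (our construction, not the authors' code) of regressing discounted cashflows on a risk factor by ordinary least squares to obtain pathwise option values, then averaging for the expected exposure; the toy cashflows, polynomial basis, and percentile level are all assumptions.

```python
import numpy as np

# Given simulated risk factors S_t and realized cashflows C along each
# path (discounted to time t), approximate pathwise option values
# V_t(S_t) = E[C | S_t] by ordinary least squares on polynomial features,
# then estimate the expected exposure EE(t) by a Monte-Carlo average.

rng = np.random.default_rng(1)
n_paths = 10_000
S_t = rng.lognormal(mean=0.0, sigma=0.2, size=n_paths)             # risk factor at time t
C = np.maximum(1.1 - S_t, 0.0) + 0.05 * rng.normal(size=n_paths)   # toy discounted cashflows

# OLS regression on a polynomial basis in S_t.
basis = np.column_stack([S_t**p for p in range(4)])
coef, *_ = np.linalg.lstsq(basis, C, rcond=None)
V_t = basis @ coef                                                 # pathwise option values

EE_t = np.mean(np.maximum(V_t, 0.0))                  # expected exposure at t
PFE_t = np.quantile(np.maximum(V_t, 0.0), 0.975)      # empirical 97.5%-percentile
print(EE_t, PFE_t)
```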



Automated Market Makers for Decentralized Finance (DeFi)
Yongge Wang
arXiv

This paper compares mathematical models for automated market makers, including the logarithmic market scoring rule (LMSR), liquidity-sensitive LMSR (LS-LMSR), constant product/mean/sum, and others. It is shown that though LMSR may not be a good model for Decentralized Finance (DeFi) applications, LS-LMSR has several advantages over constant product/mean based automated market makers. However, LS-LMSR requires complicated computation (i.e., logarithm and exponentiation) and its cost function curve is concave. In certain DeFi applications, it is preferable to have computationally efficient cost functions with convex curves to conform with the principle of supply and demand. This paper proposes and analyzes constant circle/ellipse based cost functions for automated market makers. The proposed cost functions are computationally efficient (they require only multiplication and square-root calculation) and have several advantages over widely deployed constant product cost functions. For example, the proposed market makers are more robust against front-runner (slippage) attacks.
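
To make the comparison concrete, here is a minimal sketch of the swap arithmetic (our illustration of the general idea, not the paper's exact formulas): a constant-product AMM preserves $xy = k$, while a circle-based invariant needs only multiplication and a square root, matching the computational point in the abstract. The centre and radius parameters below are hypothetical.

```python
import math

# A constant-product AMM keeps x * y = k invariant; a swap of dx units of
# X returns dy units of Y so that the invariant still holds.
def constant_product_swap(x, y, dx):
    k = x * y
    return y - k / (x + dx)

# A circle-based invariant (x - a)^2 + (y - a)^2 = r^2, restricted to the
# lower-left arc (x, y < a), is decreasing and convex and needs only a
# square root to evaluate -- the computational advantage referred to
# above.  The parameters a and r here are hypothetical.
def constant_circle_swap(x, y, dx, a, r):
    # Solve for the new y on the same arc after x increases by dx.
    y_new = a - math.sqrt(r**2 - (x + dx - a)**2)
    return y - y_new

print(constant_product_swap(100.0, 100.0, 10.0))   # ~9.09 units of Y out
print(constant_circle_swap(100.0, 100.0, 10.0,
                           a=300.0, r=200.0 * math.sqrt(2)))  # ~9.5 units out
```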



Bibliometric indices as a measure of long-term competitive balance in knockout tournaments
László Csató, Dóra Gréta Petróczy
arXiv

We argue for the application of bibliometric indices to quantify long-term uncertainty of outcome in sports. The Euclidean index is proposed to reward quality over quantity, while the rectangle index can be an appropriate measure of core performance. Their differences are highlighted through an axiomatic analysis and several examples. Our approach also requires a weighting scheme to compare different achievements. The methodology is illustrated by studying the knockout stage of the UEFA Champions League in the 16 seasons played between 2003 and 2019: club and country performances as well as three types of competitive balance are considered. Measuring competition at the level of national associations is a novelty. All results are remarkably robust with respect to the bibliometric index and the assigned weights. Inequality has not increased among the elite clubs or between the national associations; however, it has changed within some countries. Since the performances of national associations are more stable than the results of individual clubs, it would be better to build the seeding in the UEFA Champions League group stage upon association coefficients adjusted for league finishing positions rather than club coefficients.
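
For readers unfamiliar with the two indices, a minimal sketch (ours; the definitions follow the bibliometric literature, but the weighting scheme here is hypothetical):

```python
import math

# Applied to a club's weighted achievements per season: the Euclidean
# index is the Euclidean norm of the score vector, so one outstanding
# season counts more than many mediocre ones (quality over quantity);
# the rectangle index is the largest area h * s such that the club scored
# at least s in each of h seasons, i.e. the biggest rectangle under the
# sorted score profile (core performance).

def euclidean_index(scores):
    return math.sqrt(sum(s * s for s in scores))

def rectangle_index(scores):
    ranked = sorted(scores, reverse=True)
    return max((i + 1) * s for i, s in enumerate(ranked))

seasons = [8, 8, 2, 1, 0, 0]        # hypothetical points per season
print(euclidean_index(seasons))     # ~11.5
print(rectangle_index(seasons))     # 16 = 2 seasons x 8 points
```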



Cognitive Abilities in the Wild: Population-scale game-based cognitive assessment
Mads Kock Pedersen, Carlos Mauricio Castaño Díaz, Mario Alejandro Alba-Marrugo, Ali Amidi, Rajiv Vaid Basaiawmoit, Carsten Bergenholtz, Morten H. Christiansen, Miroslav Gajdacz, Ralph Hertwig, Byurakn Ishkhanyan, Kim Klyver, Nicolai Ladegaard, Kim Mathiasen, Christine Parsons, Michael Bang Petersen, Janet Rafner, Anders Ryom Villadsen, Mikkel Wallentin, Jacob Friis Sherson, Skill Lab players
arXiv

Psychology and the social sciences are undergoing a revolution: It has become increasingly clear that traditional lab-based experiments fail to capture the full range of differences in cognitive abilities and behaviours across the general population. Some progress has been made toward devising measures that can be applied at scale across individuals and populations. What has been missing is a broad battery of validated tasks that can be easily deployed, used across different age ranges and social backgrounds, and employed in practical, clinical, and research contexts. Here, we present Skill Lab, a game-based approach allowing the efficient assessment of a suite of cognitive abilities. Skill Lab has been validated outside the lab in a crowdsourced population-scale sample recruited in collaboration with the Danish Broadcasting Corporation (Danmarks Radio, DR). Our game-based measures are five times faster to complete than the equivalent traditional measures and replicate previous findings on the decline of cognitive abilities with age in a large population sample. Furthermore, by combining the game data with an in-game survey, we demonstrate that this unique dataset has implications for key questions in social science, challenging the Jack-of-all-Trades theory of entrepreneurship and providing evidence that risk preference is independent of executive functioning.



Corrigendum for "Second-order reflected backward stochastic differential equations" and "Second-order BSDEs with general reflection and game options under uncertainty"
Anis Matoussi, Dylan Possamaï, Chao Zhou
arXiv

The aim of this short note is to fill in a gap in our earlier paper [16] on 2BSDEs with reflections, and to explain how to correct the subsequent results in the second paper [15]. We also provide more insight into the properties of 2RBSDEs, in the light of the recent contributions [13, 23] in the so-called $G$-framework.



Forecasting the Leading Indicator of a Recession: The 10-Year minus 3-Month Treasury Yield Spread
Sudiksha Joshi
arXiv

In this research paper, I have applied various econometric time-series models and two machine-learning models to forecast daily data on the yield spread. First, I decomposed the yield curve into its principal components, then simulated various paths of the yield spread using the Vasicek model. After constructing univariate ARIMA models and multivariate models such as ARIMAX, VAR, and Long Short-Term Memory (LSTM), I computed the root mean squared error to measure how far the forecasts deviate from the observed values. Through impulse response functions, I measured the impact of various shocks on the differenced yield spread. The results indicate that the parsimonious univariate ARIMA model outperforms the richly parameterized VAR method, and that the complex LSTM with multivariate data performs as well as the simple ARIMA model.
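
As an illustration of the univariate baseline, a minimal sketch (ours, not the paper's code) fits an ARIMA model to a synthetic daily spread series and scores the out-of-sample forecast by RMSE; the order (1, 1, 1) and the placeholder data are assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Fit ARIMA on a training window, forecast the holdout, and compute RMSE.
rng = np.random.default_rng(2)
spread = pd.Series(np.cumsum(rng.normal(0, 0.02, 1000)))  # stand-in for the 10y-3m spread

train, test = spread[:900], spread[900:]
model = ARIMA(train, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=len(test))

rmse = np.sqrt(np.mean((forecast.values - test.values) ** 2))
print(rmse)
```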



Growth of Global Corporate Debt: Main Facts and Policy Challenges
Facundo Abraham, Juan Jose Cortina Lorente, Sergio L. Schmukler
SSRN

This paper surveys the literature to document the main stylized facts, risks, and policy challenges related to the expansion of global nonfinancial corporate debt after the 2008-09 global financial crisis. Nonfinancial corporate debt steadily increased after the crisis, especially in emerging economies. Between 2008 and 2018, corporate debt increased from 56 to 96 percent of gross domestic product in emerging economies, whereas this ratio remained stable in developed economies. Nonfinancial corporate debt was mainly issued through bond markets, and its growth can be largely attributed to accommodative monetary policies in developed economies. Whereas increased debt financing has some positive aspects, it has also amplified firms’ solvency risks and exposure to changes in market conditions, such as the economic downturn triggered by the COVID-19 pandemic. Because capital markets have a larger role in firm financing, policy makers have limited tools to mitigate the risks of growing firm debt.

Joint Modelling and Calibration of SPX and VIX by Optimal Transport
Ivan Guo, Gregoire Loeper, Jan Obloj, Shiyi Wang
arXiv

This paper addresses the joint calibration problem of SPX options and VIX options or futures. We show that the problem can be formulated as a semimartingale optimal transport problem under a finite number of discrete constraints, in the spirit of [arXiv:1906.06478]. We introduce a PDE formulation along with its dual counterpart. The solution, a calibrated diffusion process, can be represented via the solutions of Hamilton-Jacobi-Bellman equations arising from the dual formulation. The method is tested on both simulated data and market data. Numerical examples show that the model can be accurately calibrated to SPX options, VIX options and VIX futures simultaneously.
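
For context, the standard idealized link that couples the two markets (assuming continuous SPX dynamics, with $\tau$ = 30 days) is $\mathrm{VIX}_t^2 = -\frac{2}{\tau}\,\mathbb{E}^{\mathbb{Q}}\big[\log\frac{S_{t+\tau}}{S_t} \,\big|\, \mathcal{F}_t\big]$; VIX derivatives thus constrain conditional functionals of the SPX law, which is what makes a constrained semimartingale optimal transport formulation natural here.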



Neglecting Uncertainties Biases House-Elevation Decisions to Manage Riverine Flood Risks
Mahkameh Zarekarizi, Vivek Srikrishnan, Klaus Keller
arXiv

Homeowners around the world elevate houses to manage flood risks. Deciding how high to elevate a house poses a nontrivial decision problem. The U.S. Federal Emergency Management Agency (FEMA) recommends elevating existing houses to the Base Flood Elevation (the elevation of the 100-yr flood) plus a freeboard. This recommendation neglects many uncertainties. Here we analyze a case study of riverine flood risk management using a multi-objective robust decision-making framework in the face of deep uncertainties. While the quantitative results are location-specific, the approach and overall insights are generalizable. We find strong interactions between the economic, engineering, and Earth science uncertainties, illustrating the need for expanding on previous integrated analyses to further understand the nature and strength of these connections. Considering deep uncertainties surrounding flood hazards, the discount rate, the house lifetime, and the fragility can increase the economically optimal house elevation to values well above the FEMA recommendation.
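
The core trade-off can be sketched compactly: up-front elevation cost against discounted expected flood damages. The toy example below (ours; the hazard, cost, and damage functions are hypothetical placeholders, not the paper's model) shows how an economically optimal elevation emerges from such an optimization.

```python
import numpy as np

# Choose a house elevation dh (metres) minimizing up-front elevation cost
# plus discounted expected flood damages over the house lifetime.
rng = np.random.default_rng(3)
annual_max_depth = rng.gumbel(loc=0.5, scale=0.6, size=100_000)  # annual flood depth (m)

def expected_npv_cost(dh, lifetime=50, discount=0.04,
                      elev_cost_per_m=20_000.0, damage_full=200_000.0):
    # Damage fraction grows linearly with water depth above the raised floor.
    depth_over_floor = np.maximum(annual_max_depth - dh, 0.0)
    annual_damage = damage_full * np.minimum(depth_over_floor / 3.0, 1.0).mean()
    npv_damages = sum(annual_damage / (1 + discount) ** t
                      for t in range(1, lifetime + 1))
    return elev_cost_per_m * dh + npv_damages

elevations = np.linspace(0.0, 4.0, 41)
best = min(elevations, key=expected_npv_cost)
print(best)   # economically optimal elevation under these assumptions
```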



Object Recognition for Economic Development from Daytime Satellite Imagery
Klaus Ackermann, Alexey Chernikov, Nandini Anantharama, Miethy Zaman, Paul A Raschky
arXiv

Reliable data about the stock of physical capital and infrastructure in developing countries is typically very scarce. This is particularly a problem at the subnational level, where existing data are often outdated, inconsistently measured, or incomplete in coverage. Traditional data-collection methods are time- and labor-intensive and costly, which often prevents developing countries from collecting this type of data. This paper proposes a novel method to extract infrastructure features from high-resolution satellite images. We collected high-resolution satellite images for 5 million 1km $\times$ 1km grid cells covering 21 African countries. We contribute to the growing body of literature in this area by training our machine learning algorithm on ground-truth data. We show that our approach strongly improves predictive accuracy. Our methodology can build the foundation to then predict subnational indicators of economic development for areas where this data is either missing or unreliable.
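
A minimal sketch of the supervised step this describes (ours; the paper's architecture, labels, and data differ) would fine-tune a pretrained CNN on ground-truth-labelled satellite tiles:

```python
import tensorflow as tf

# Fine-tune a pretrained backbone on labelled 1 km x 1 km tiles to detect
# an infrastructure feature (binary label, e.g. "paved road present").
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                                # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),   # feature present / absent
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# 'tiles' and 'labels' stand in for the ground-truth dataset:
# tiles: float32 array (n, 224, 224, 3); labels: {0,1} array (n,).
# model.fit(tiles, labels, validation_split=0.2, epochs=5)
```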



Sanction or Financial Crisis? An Artificial Neural Network-Based Approach to model the impact of oil price volatility on Stock and industry indices
Somayeh Kokabisaghi, Mohammadesmaeil Ezazi, Reza Tehrani, Nourmohammad Yaghoubi
arXiv

In this paper, we model the impact of oil price volatility on Tehran stock and industry indices in two periods: during international sanctions and post-sanction. For this analysis, we use feed-forward neural networks. The study period runs from 2008 to 2018 and is split into the period of international energy sanctions and the post-sanction period. The results show that feed-forward neural networks perform well in predicting the stock market and industry indices, which means oil price volatility has a significant impact on stock and industry market indices. During the post-sanction period and the global financial crisis, the model performs better in predicting the industry index. Additionally, oil price-stock market index prediction performs better in the period of international sanctions. These results are, to some extent, important for financial market analysts and policy makers to understand which factors influence the financial market, and when, especially in an oil-dependent country such as Iran facing uncertainty in international politics.

Keywords: feed-forward neural networks, industry index, international energy sanction, oil price volatility, Tehran stock index
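
As an illustration of the modelling setup, a minimal sketch (ours; the paper's features, architecture, and data differ) of a feed-forward network mapping lagged oil-price volatility to an index, fit on one subsample and evaluated on the other:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
oil_vol = rng.gamma(2.0, 0.01, size=2000)                  # placeholder volatility series
index = 100 + np.cumsum(-5 * oil_vol + rng.normal(0, 0.1, 2000))  # placeholder index

# Features: 5 lags of oil-price volatility; target: the index level.
X = np.column_stack([oil_vol[i:i - 5] for i in range(5)])
y = index[5:]

split = len(y) // 2                                        # first vs second sub-period
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
mlp.fit(X[:split], y[:split])
print(mean_squared_error(y[split:], mlp.predict(X[split:])) ** 0.5)  # out-of-sample RMSE
```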



Scenario Forecast of Cross-border Electric Interconnection towards Renewables in South America
Wenhao Wang, Jing Meng, Duan Chen, Wei Cong
arXiv

Cross-border electric interconnection towards renewables is a promising solution for the electric sector under the UN 2030 Sustainable Development Goals and is widely promoted in emerging economies. This paper comprehensively investigates the state of the art of renewable resources and cross-border electric interconnection in South America. Based on raw data collected from typical countries, a long-term scenario forecast methodology is applied to estimate key indicators of the electric sector in target years, comparing an active cross-border Interconnections Towards Renewables (ITR) scenario with a Business as Usual (BAU) scenario for the South America region. Key indicators including peak load, installed capacity, investment, and generation cost are forecasted and comparatively analyzed for the years 2035 and 2050. The comparative data analysis shows that by promoting cross-border interconnection towards renewables in South America, renewable resources can be highly utilized for energy supply, the energy matrix can be better balanced, economic benefits can be realized, and generation costs can be greatly reduced.



Time series copula models using d-vines and v-transforms: an alternative to GARCH modelling
Martin Bladt, Alexander J. McNeil
arXiv

An approach to modelling volatile financial return series using d-vine copulas combined with uniformity preserving transformations known as v-transforms is proposed. By generalizing the concept of stochastic inversion of v-transforms, models are obtained that can describe both stochastic volatility in the magnitude of price movements and serial correlation in their directions. In combination with parametric marginal distributions it is shown that these models can rival and sometimes outperform well-known models in the extended GARCH family.
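
The key building block is easy to demonstrate. A minimal sketch (ours, illustrating the building block rather than the paper's full model) of the simplest symmetric v-transform, $V(u) = |2u - 1|$, checking empirically that it is uniformity preserving:

```python
import numpy as np

# V(u) = |2u - 1| maps Uniform(0,1) to Uniform(0,1) while folding away the
# sign of a return, so dependence in the magnitudes of price movements can
# be modelled by a d-vine copula on the transformed PITs V(U).
rng = np.random.default_rng(4)
u = rng.uniform(size=100_000)          # PITs of returns under the marginal
v = np.abs(2.0 * u - 1.0)              # v-transformed PITs

# Uniformity check: each decile should contain ~10% of the sample.
print(np.histogram(v, bins=10, range=(0, 1))[0] / len(v))
```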



To snipe or not to snipe, that is the question! Transitions in sniping behaviour among competing algorithmic traders
Somayeh Kokabisaghi, Eric J Pauwels, Andre B Dorsman
arXiv

In this paper we extend the investigation into the transition from sure to probabilistic sniping as introduced in Menkveld and Zoican \cite{mz2017}. In that paper, the authors introduce a stylized version of a competitive game in which high frequency traders (HFTs) interact with each other and with liquidity traders. The authors then show that risk aversion plays an important role in the transition from sure to mixed (or probabilistic) sniping. In this paper, we re-interpret and extend these conclusions in the context of repeated games and highlight some differences in results. In particular, we identify situations in which probabilistic sniping is genuinely profitable that are qualitatively different from the ones obtained in \cite{mz2017}. It turns out that beyond a specific risk aversion threshold the game resembles the well-known prisoner's dilemma, in that probabilistic sniping becomes a way to cooperate among the HFTs that leaves all the participants better off. In order to turn this into a viable strategy for the repeated game, we show how compliance can be monitored through the use of sequential statistical testing.

Keywords: algorithmic trading, bandits, high-frequency exchange, Nash equilibrium, repeated games, sniping, subgame-perfect equilibrium, sequential probability ratio, transition
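
The compliance-monitoring idea maps naturally onto Wald's sequential probability ratio test. A minimal sketch (ours; the hypotheses and error rates are hypothetical, not the paper's calibration):

```python
import math

# H0: the rival snipes with the agreed probability p0 (compliance);
# H1: it defects and snipes with probability p1 > p0.  After each sniping
# opportunity, update the log-likelihood ratio and stop when it crosses a
# Wald boundary.
def sprt(observations, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)    # accept H1 (defection)
    lower = math.log(beta / (1 - alpha))    # accept H0 (compliance)
    llr = 0.0
    for n, sniped in enumerate(observations, start=1):
        llr += math.log(p1 / p0) if sniped else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return n, "defection detected"
        if llr <= lower:
            return n, "compliant"
    return len(observations), "undecided"

print(sprt([1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]))   # stops early once evidence accumulates
```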



Volatility Forecasting with 1-dimensional CNNs via transfer learning
Bernadett Aradi, Gábor Petneházi, József Gáll
arXiv

Volatility is a natural risk measure in finance, as it quantifies the variation of stock prices. A frequently considered problem in mathematical finance is to forecast different estimates of volatility. What makes deep learning methods promising for the prediction of volatility is the fact that stock price returns satisfy some common properties, referred to as `stylized facts'. Also, the amount of available data can be large, favoring the application of neural networks. We used 10 years of daily prices for hundreds of frequently traded stocks and compared different CNN architectures: some networks use only the considered stock, but we also tried out a construction which, for training, uses many more series, excluding the considered stocks. Essentially, this is an application of transfer learning, and its performance turns out to be much better in terms of prediction error. We also compare our dilated causal CNNs to the classical ARIMA method using an automatic model selection procedure.
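
As an illustration of the architecture named here, a minimal sketch (ours; the paper's networks, data, and training procedure differ) of a dilated causal 1-D CNN for volatility prediction, with transfer-style pre-training on other stocks' series:

```python
import numpy as np
import tensorflow as tf

# Causal padding ensures the prediction at time t only sees returns up to
# t, and stacked dilations grow the receptive field exponentially.
window = 64
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, 2, dilation_rate=1, padding="causal",
                           activation="relu", input_shape=(window, 1)),
    tf.keras.layers.Conv1D(16, 2, dilation_rate=2, padding="causal", activation="relu"),
    tf.keras.layers.Conv1D(16, 2, dilation_rate=4, padding="causal", activation="relu"),
    tf.keras.layers.Conv1D(16, 2, dilation_rate=8, padding="causal", activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="softplus"),   # volatility is non-negative
])
model.compile(optimizer="adam", loss="mse")

# Transfer learning in the spirit described: pre-train on many other
# stocks' return windows, then apply (or fine-tune) on the target stock.
X_other = np.random.randn(1024, window, 1).astype("float32")      # placeholder data
y_other = np.abs(np.random.randn(1024, 1)).astype("float32")
model.fit(X_other, y_other, epochs=1, verbose=0)
```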