Research articles for 2020-03-29
arXiv
We present a solution to an optimal stopping problem for a process with a wide-class of novel dynamics. The dynamics model the support/resistance line concept from financial technical analysis.
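The abstract does not specify the dynamics or the payoff, but for orientation the generic optimal stopping problem it refers to can be written as follows (the discount rate r and gain function g are placeholders, not taken from the paper):

    V(x) = \sup_{\tau} \, \mathbb{E}_x\!\left[ e^{-r\tau}\, g(X_\tau) \right],

where X is the underlying price process with the support/resistance-type dynamics and the supremum runs over stopping times \tau of the filtration generated by X.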
arXiv
We construct a data-driven statistical indicator for quantifying the tail risk perceived by the EURGBP option market surrounding Brexit-related events. We show that under lognormal SABR dynamics this tail risk is closely related to the so-called martingale defect and provide a closed-form expression for this defect, which can be computed by solving an inverse calibration problem. In order to cope with the uncertainty inherent to this inverse problem, we adopt a Bayesian statistical parameter estimation perspective. We probe the resulting posterior densities with a combination of optimization and adaptive Markov chain Monte Carlo methods, thus providing a careful uncertainty estimation for all of the underlying parameters and the martingale defect indicator. Finally, to demonstrate the feasibility of the proposed method, we provide a Brexit "fever curve" for the year 2019.
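For context, a minimal statement of the lognormal (beta = 1) SABR dynamics and of the martingale defect the abstract refers to; the notation is the standard SABR one and is not taken from the paper:

    dF_t = \sigma_t F_t \, dW_t^1, \qquad d\sigma_t = \nu \sigma_t \, dW_t^2, \qquad d\langle W^1, W^2 \rangle_t = \rho \, dt,
    D(T) = F_0 - \mathbb{E}[F_T] \ge 0.

When the forward F is only a strict local martingale, D(T) > 0, and this defect is the quantity the proposed indicator tracks for EURGBP around Brexit-related dates.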
arXiv
Challenge Theory (Shye & Haber 2015; 2020) has demonstrated that a newly devised challenge index (CI), attributable to every binary choice problem, predicts the popularity of the bold option: the one with the lower probability of gaining a higher monetary outcome (in a gain problem), or the one with the higher probability of losing a lower monetary outcome (in a loss problem). In this paper we show how Facet Theory structures the choice-behavior concept space and yields rationalized measurements of gambling behavior. The data of this study consist of responses obtained from 126 students, specifying their preferences in 44 risky decision problems. A Faceted Smallest Space Analysis (SSA) of the 44 problems confirmed the hypothesis that the space of binary risky choice problems is partitionable by two binary axial facets: (a) Type of Problem (gain vs. loss); and (b) CI (Low vs. High). Four composite variables, representing the validated constructs Gain, Loss, High-CI and Low-CI, were processed using Multiple Scaling by Partial Order Scalogram Analysis with base Coordinates (POSAC), leading to a meaningful and intuitively appealing interpretation of two necessary and sufficient gambling-behavior measurement scales.
arXiv
We present two methods, based on Chebyshev tensors, to compute dynamic sensitivities of financial instruments within a Monte Carlo simulation. These methods are implemented and run in a Monte Carlo engine to compute Dynamic Initial Margin (DIM) as defined by ISDA (SIMM). We show that the levels of accuracy, speed and implementation effort obtained, compared to the benchmark (DIM obtained by calling pricing functions such as those found in risk engines), are better than those obtained by alternative methods presented in the literature, such as regressions [Zhu Chan] and Deep Neural Nets [DNNs IM].
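As a one-dimensional illustration of the general idea (a sketch under simplifying assumptions, not the paper's implementation): an expensive pricing function is sampled once on Chebyshev nodes, and the resulting polynomial proxy and its derivative are then evaluated cheaply on every Monte Carlo scenario.

    import numpy as np
    from numpy.polynomial import chebyshev as C
    from scipy.stats import norm

    # Stand-in for an expensive risk-engine pricing call (here: a Black-Scholes call).
    def pricer(s, k=100.0, r=0.01, vol=0.2, t=1.0):
        d1 = (np.log(s / k) + (r + 0.5 * vol**2) * t) / (vol * np.sqrt(t))
        d2 = d1 - vol * np.sqrt(t)
        return s * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

    # Offline step: sample the pricer on Chebyshev nodes over the scenario range.
    a, b, deg = 50.0, 150.0, 20
    nodes = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * np.arange(deg + 1) / deg)
    coeffs = C.chebfit((2 * nodes - (a + b)) / (b - a), pricer(nodes), deg)

    # Online step: evaluate the proxy and its sensitivity (delta) on simulated scenarios.
    scenarios = np.random.default_rng(0).uniform(a, b, size=100_000)
    x = (2 * scenarios - (a + b)) / (b - a)                    # map to [-1, 1]
    prices = C.chebval(x, coeffs)
    deltas = C.chebval(x, C.chebder(coeffs)) * 2.0 / (b - a)   # chain rule for rescaling

In higher dimensions the same construction uses a tensor grid of Chebyshev nodes, which is where the tensor formats discussed in the paper become relevant.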
arXiv
We propose a neural network approach to price European call options that significantly outperforms some existing pricing models and comes with guarantees that its predictions are economically reasonable. To achieve this, we introduce a class of gated neural networks that automatically learn to divide and conquer the problem space for robust and accurate pricing. We then derive instantiations of these networks that are 'rational by design' in terms of naturally encoding a valid call option surface that enforces no-arbitrage principles. This integration of human insight within data-driven learning provides significantly better generalisation in pricing performance due to the inductive bias encoded in the learning, guarantees sanity in the model's predictions, and provides econometrically useful byproducts such as the risk-neutral density.
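To make 'rational by design' concrete, here is a minimal sketch of one way to hard-code a no-arbitrage shape constraint (a call price that is non-increasing and convex in the strike) by keeping all weights non-negative; this toy single-layer construction is an illustration, not the paper's gated architecture.

    import numpy as np

    rng = np.random.default_rng(0)
    # Unconstrained raw parameters of a single hidden layer in the strike K.
    raw_w, raw_a = rng.normal(size=16), rng.normal(size=16)
    c = rng.normal(100.0, 20.0, size=16)

    def softplus(x):
        return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)  # numerically stable

    def call_price(strikes):
        # w_i, a_i >= 0 make each unit softplus(a_i * (c_i - K)) convex and
        # non-increasing in K, so their non-negative sum has a valid call-price shape.
        w, a = softplus(raw_w), softplus(raw_a)
        K = np.atleast_1d(np.asarray(strikes, dtype=float))
        return (w * softplus(a * (c[None, :] - K[:, None]))).sum(axis=1)

    prices = call_price(np.linspace(60.0, 140.0, 5))  # decreasing in strike by construction

Training would fit raw_w, raw_a and c to observed option prices; because the shape constraints are built into the parameterisation, the fitted curve stays monotone and convex in strike regardless of the data.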
arXiv
An approach to the modelling of financial return series using a class of uniformity-preserving transforms for uniform random variables is proposed. V-transforms describe the relationship between quantiles of the return distribution and quantiles of the distribution of a predictable volatility proxy variable constructed as a function of the return. V-transforms can be represented as copulas and permit the formulation and estimation of models that combine arbitrary marginal distributions with linear or non-linear time series models for the dynamics of the volatility proxy. The idea is illustrated using a transformed Gaussian ARMA process for volatility, yielding the class of VT-ARMA copula models. These can replicate many of the stylized facts of financial return series and facilitate the calculation of marginal and conditional characteristics of the model including quantile measures of risk. Estimation of models is carried out by adapting the exact maximum likelihood approach to the estimation of ARMA processes.
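For intuition, a minimal sketch of the simplest symmetric V-transform, V(u) = |2u - 1|, which maps the probability-integral transform of a return to that of the volatility proxy |return - median|; the Student-t marginal and the ARMA step indicated below are illustrative assumptions, not the paper's estimated model.

    import numpy as np
    from scipy import stats

    returns = stats.t.rvs(df=5, scale=0.01, size=1000, random_state=1)  # toy return series

    # PIT of the returns under a fitted parametric marginal (here: Student t).
    df, loc, scale = stats.t.fit(returns)
    u = stats.t.cdf(returns, df, loc=loc, scale=scale)

    # Symmetric V-transform: quantile level of the volatility proxy.
    v = np.abs(2.0 * u - 1.0)

    # Gaussianize the proxy quantiles; a linear time-series model (e.g. an ARMA fit
    # via statsmodels' ARIMA with order=(1, 0, 1)) would then describe their dynamics.
    z = stats.norm.ppf(np.clip(v, 1e-6, 1.0 - 1e-6))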
arXiv
Life insurance cash flows become reserve-dependent when contract conditions are modified during the contract term on the condition that actuarial equivalence is maintained. As a result, insurance cash flows and prospective reserves depend on each other in a circular way, and it is a non-trivial problem to resolve this circularity and make cash flows and prospective reserves well-defined. In Markovian models, the (stochastic) Thiele equation and the Cantelli Theorem are the standard tools for solving the circularity issue and for maintaining actuarial equivalence. This paper extends the stochastic Thiele equation and the Cantelli Theorem to non-Markovian frameworks and presents a recursive scheme for the calculation of multiple contract modifications.
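For reference, the classical Markovian Thiele differential equation that the paper generalizes, in standard multi-state notation (state-wise reserve V_i, interest rate r, sojourn payment rate b_i, transition payments b_{ij}, transition intensities \mu_{ij}); the notation is not taken from the paper:

    \frac{d}{dt} V_i(t) = r(t)\, V_i(t) - b_i(t) - \sum_{j \neq i} \mu_{ij}(t)\,\bigl( b_{ij}(t) + V_j(t) - V_i(t) \bigr).

In the non-Markovian setting treated in the paper, transition rates and payments may depend on the past, so this equation no longer applies directly and has to be extended.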
arXiv
When ranking big data observations such as colleges in the United States, diverse consumers reveal heterogeneous preferences. The objective of this paper is to sort out a linear ordering for these observations and to recommend strategies to improve their relative positions in the ranking. A properly sorted solution could help consumers make the right choices, and governments make wise policy decisions. Previous researchers have applied exogenous weighting or multivariate regression approaches to sort big data objects, ignoring their variety and variability. By recognizing the diversity and heterogeneity among both the observations and the consumers, we instead apply endogenous weighting to these contradictory revealed preferences. The outcome is a consistent steady-state solution to the counterbalance equilibrium within these contradictions. The solution takes into consideration the spillover effects of multiple-step interactions among the observations. When information from data is efficiently revealed in preferences, the revealed preferences greatly reduce the volume of the required data in the sorting process. The employed approach can be applied in many other areas, such as sports team ranking, academic journal ranking, voting, and real effective exchange rates.
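The abstract does not spell out the algorithm, but one way to see what an endogenously weighted, spillover-aware steady state can look like is an eigenvector-style rating computed by power iteration on a matrix of pairwise revealed preferences; the sketch below is an illustrative stand-in, not necessarily the paper's method.

    import numpy as np

    # P[i, j]: number of consumers revealing a preference for object i over object j
    # (toy counts; in the application these come from observed choices).
    rng = np.random.default_rng(2)
    P = rng.integers(1, 50, size=(6, 6)).astype(float)
    np.fill_diagonal(P, 0.0)

    # Column-normalize so that an object's score is the preference-weighted sum of the
    # scores of the objects it beats; multi-step (spillover) wins then count as well.
    M = P / P.sum(axis=0, keepdims=True)

    score = np.full(6, 1.0 / 6.0)
    for _ in range(1000):                       # power iteration to the steady state
        new = M @ score
        new /= new.sum()
        if np.abs(new - score).sum() < 1e-12:
            break
        score = new

    ranking = np.argsort(-score)                # linear ordering of the six objects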
arXiv
When innovations compete for adoption, chance historical events can allow an inferior strategy to spread at the expense of superior alternatives. However, advantage is not always due to chance, and networks have emerged as an important determinant of organizational behavior. To understand what factors can impact the likelihood that the best alternative will be adopted, this paper asks: how does network structure shape the emergence of social and technological conventions? Prior research has found that highly influential people, or "central" nodes, can be beneficial from the perspective of a single innovation because promotion by central nodes can increase the speed of adoption. In contrast, when considering the competition of multiple strategies, the presence of central nodes may pose a risk, and the resulting "centralized" networks are not guaranteed to favor the optimal strategy. This paper uses agent-based simulation to investigate the effect of network structure on a standard model of convention formation, finding that network centralization increases the speed of convention formation but also decreases the likelihood that the best strategy will become widely adopted. Surprisingly, this finding does not indicate a speed/optimality trade-off: dense networks are both fast and optimal.
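A minimal agent-based sketch of the kind of experiment described: agents on a network repeatedly best-respond to their neighbours in a two-strategy coordination game whose options have unequal payoffs, and one records both how long a convention takes to form and whether it is the better strategy. The payoff values, update rule and network sizes are illustrative assumptions, not the paper's exact model.

    import numpy as np
    import networkx as nx

    def simulate(graph, payoff=(1.0, 1.2), steps=10_000, seed=0):
        """Best-response dynamics for a 2-strategy coordination game on `graph`.

        Strategy 1 is the superior option (higher coordination payoff). Returns the
        step at which everyone agrees (or `steps`) and the strategy agreed upon.
        """
        rng = np.random.default_rng(seed)
        nodes = list(graph.nodes)
        state = rng.integers(0, 2, size=len(nodes))       # random initial strategies
        for t in range(steps):
            i = rng.integers(len(nodes))
            neigh = list(graph.neighbors(nodes[i]))
            if not neigh:
                continue
            counts = np.bincount(state[neigh], minlength=2)
            utilities = counts * np.asarray(payoff)       # expected coordination payoff
            state[i] = int(np.argmax(utilities))
            if state.min() == state.max():
                return t, int(state[0])
        return steps, -1                                  # no convention formed

    # Compare a centralized (scale-free) and a decentralized (random) network.
    for name, g in [("centralized", nx.barabasi_albert_graph(200, 2, seed=1)),
                    ("decentralized", nx.gnm_random_graph(200, 396, seed=1))]:
        t, winner = simulate(g)
        print(name, "steps:", t, "convention:", winner)

Averaging over many seeds would give the speed and optimality statistics that the paper compares across network structures.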
arXiv
While the coronavirus spreads around the world, governments are attempting to reduce contagion rates at the expense of negative economic effects. Market expectations have plummeted, foreshadowing the risk of a global economic crisis and mass unemployment. Governments provide huge financial aid programmes to mitigate the expected economic shocks. To achieve higher effectiveness with cyclical and fiscal policy measures, it is key to identify the industries that are most in need of support.
In this study, we introduce a data-mining approach to measure the industry-specific risks related to COVID-19. We examine company risk reports filed with the U.S. Securities and Exchange Commission (SEC). This data set allows for a real-time analysis of risks, revealing that companies' awareness of corona-related business risks runs weeks ahead of overall stock market developments. The risk reports differ substantially between industries, both in magnitude and in nature. Based on natural language processing techniques, we identify specific corona-related risk topics and their relevance for different industries. Our approach allows us to cluster the industries into distinct risk groups.
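A schematic sketch of the text-mining pipeline described (TF-IDF features, topic extraction, grouping of industries by their topic mix); the concrete models (NMF topics, k-means clusters) and the toy snippets below are illustrative assumptions, not the authors' exact pipeline.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF
    from sklearn.cluster import KMeans

    # Toy stand-ins for corona-related passages from SEC risk reports, one per industry.
    industry_docs = {
        "airlines": "travel restrictions flight cancellations demand collapse quarantine",
        "retail":   "store closures supply chain disruption consumer demand quarantine",
        "software": "remote work cloud demand minor disruption workforce health",
        "energy":   "oil demand collapse price shock supply chain disruption",
    }

    vec = TfidfVectorizer()
    X = vec.fit_transform(list(industry_docs.values()))

    # Extract a small number of latent risk topics and each industry's loading on them.
    nmf = NMF(n_components=2, init="nndsvda", random_state=0)
    loadings = nmf.fit_transform(X)

    # Group industries with similar risk-topic profiles into risk groups.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(loadings)
    for industry, label in zip(industry_docs, labels):
        print(industry, "-> risk group", label)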
The findings of this study are summarised and updated in an online dashboard that tracks the industry-specific risks related to the crisis, as it spreads through the economy. The tracking tool can provide crucial information for policy-makers to effectively target financial support and to mitigate the economic shocks of the current crisis.