Research articles for 2020-12-15

A general framework for a joint calibration of VIX and VXX options
Martino Grasselli, Andrea Mazzoran, Andrea Pallavicini

We analyze the VIX futures market with a focus on the exchange-traded notes written on such contracts. In particular, we investigate the VXX notes tracking the short-end part of the futures term structure. Inspired by recent developments in commodity smile modeling, we present a multi-factor stochastic local-volatility model able to jointly calibrate plain vanilla options both on VIX futures and VXX notes. We discuss numerical results on real market data, highlighting the impact of model parameters on implied volatilities.

A mathematical model of national-level food system sustainability
Conor Goold, Simone Pfuderer, William H. M. James, Nik Lomax, Fiona Smith, Lisa M. Collins

The global food system faces various endogenous and exogenous, biotic and abiotic risk factors, including a rising human population, higher population densities, price volatility and climate change. Quantitative models play an important role in understanding food systems' expected responses to shocks and stresses. Here, we present a stylised mathematical model of a national-level food system that incorporates domestic supply of a food commodity, international trade, consumer demand, and food commodity price. We derive a critical compound parameter signalling when domestic supply will become unsustainable and the food system entirely dependent on imports, which results in higher commodity prices, lower consumer demand and lower inventory levels. Using Bayesian estimation, we apply the dynamic food systems model to infer the sustainability of the UK pork industry. We find that the UK pork industry is currently sustainable, but because the industry depends on imports to meet demand, a decrease in self-sufficiency below 50% (current levels are 60-65%) would bring it close to the critical boundary signalling its collapse. Our model provides a theoretical foundation for future work to determine more complex causal drivers of food system vulnerability.
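The qualitative mechanism (a compound parameter beyond which domestic supply collapses and imports take over) can be illustrated with a deliberately crude toy recursion. This is purely illustrative and is not the paper's model: the dynamics, parameter names and values below are assumptions for the sketch.

```python
# Hypothetical toy, NOT the paper's model: logistic domestic supply s with a
# constant offtake c that domestic production must cover,
#     s_{t+1} = s_t + r * s_t * (1 - s_t) - c.
# A positive equilibrium exists only while c <= r / 4; past that compound
# threshold, domestic supply collapses to zero and all demand must be met by
# imports -- the kind of critical-boundary behaviour the paper formalises.

def simulate(r, c, steps=400, s0=0.5):
    """Iterate the toy supply recursion and return the final supply level."""
    s = s0
    for _ in range(steps):
        s = max(s + r * s * (1.0 - s) - c, 0.0)
    return s

sustainable = simulate(r=0.5, c=0.1)   # c < r/4: settles at a positive level
collapsed = simulate(r=0.5, c=0.2)     # c > r/4: domestic supply hits zero
```

With r = 0.5 the threshold is c = 0.125: the first run converges to a positive equilibrium (~0.72), the second decays monotonically to zero.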

Application of deep quantum neural networks to finance
Takayuki Sakuma

Use of the deep quantum neural network proposed by Beer et al. (2020) could grant new perspectives on solving numerical problems arising in the field of finance. We discuss this potential in the context of simple experiments, such as learning implied volatilities and the differential machine learning approach proposed by Huge and Savine (2020). The deep quantum neural network is considered to be a promising candidate for developing highly powerful methods in finance.

Early warnings of COVID-19 outbreaks across Europe from social media?
Milena Lopreite, Pietro Panzarasa, Michelangelo Puliga, Massimo Riccaboni

We analyze data from Twitter to uncover early-warning signals of COVID-19 outbreaks in Europe in the winter season 2019-2020, before the first public announcements of local sources of infection were made. We show evidence that unexpected levels of concerns about cases of pneumonia were raised across a number of European countries. Whistleblowing came primarily from the geographical regions that eventually turned out to be the key breeding grounds for infections. These findings point to the urgency of setting up an integrated digital surveillance system in which social media can help geo-localize chains of contagion that would otherwise proliferate almost completely undetected.

Endogenous inverse demand functions
Maxim Bichuch, Zachary Feinstein

In this work we present an equilibrium formulation for price impacts. This is motivated by the Bühlmann equilibrium, in which assets are sold into a system of market participants, and can be viewed as a generalization of the Esscher premium. Existence and uniqueness of clearing prices for the liquidation of a portfolio are studied. We also investigate other desired portfolio properties, including monotonicity and concavity. The price per portfolio unit sold is also calculated. In special cases, we study price impacts generated by market participants who follow exponential or power utility.
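For reference, the classical Esscher premium that this formulation generalizes can be written as follows (with risk-aversion parameter $a > 0$; generic notation, not necessarily the paper's):

```latex
\pi_a(X) \;=\; \frac{\mathbb{E}\!\left[X\, e^{a X}\right]}{\mathbb{E}\!\left[e^{a X}\right]}
```

i.e., the expectation of the risk $X$ under the exponentially tilted (Esscher-transformed) measure.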

Ergodicity-breaking reveals time optimal decision making in humans
David Meder, Finn Rabe, Tobias Morville, Kristoffer H. Madsen, Magnus T. Koudahl, Ray J. Dolan, Hartwig R. Siebner, Oliver J. Hulme

Ergodicity describes an equivalence between the expectation value and the time average of observables. Applied to human behaviour, ergodic theories of decision-making reveal how individuals should tolerate risk in different environments. To optimise wealth over time, agents should adapt their utility function according to the dynamical setting they face. Linear utility is optimal for additive dynamics, whereas logarithmic utility is optimal for multiplicative dynamics. Whether humans approximate time optimal behaviour across different dynamics is unknown. Here we compare the effects of additive versus multiplicative gamble dynamics on risky choice. We show that utility functions are modulated by gamble dynamics in ways not explained by prevailing decision theory. Instead, as predicted by time optimality, risk aversion increases under multiplicative dynamics, distributing close to the values that maximise the time average growth of wealth. We suggest that our findings motivate a need for explicitly grounding theories of decision-making on ergodic considerations.
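The ergodicity break at the heart of this line of work is easy to demonstrate numerically. The sketch below is not the authors' experiment; the gamble (wealth multiplied by 1.5 or 0.6 with equal probability) is the standard textbook example of a multiplicative dynamic whose expectation value grows while almost every individual trajectory shrinks.

```python
import math
import random

# Multiplicative gamble: wealth is multiplied by 1.5 or 0.6, each with
# probability 1/2. The ensemble average grows by factor 1.05 per round, but
# the time-average growth of a single trajectory is sqrt(1.5 * 0.6) < 1,
# so expectation value and time average disagree: the process is non-ergodic.

random.seed(0)

ensemble_mean_factor = 0.5 * 1.5 + 0.5 * 0.6   # 1.05 > 1: expectation grows
time_avg_factor = math.sqrt(1.5 * 0.6)         # ~0.949 < 1: typical path shrinks

steps = 10_000
wealth = 1.0
for _ in range(steps):
    wealth *= 1.5 if random.random() < 0.5 else 0.6
growth_per_step = wealth ** (1.0 / steps)      # empirical per-round growth
```

The simulated per-round growth factor lands near 0.949, not 1.05: maximising the time average (here, logarithmic utility) is what "time optimality" prescribes under multiplicative dynamics.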

Generalized Feynman-Kac Formula under volatility uncertainty
Bahar Akthari, Francesca Biagini, Andrea Mazzon, Katharina Oberpriller

In this paper we provide a generalization of the Feynman-Kac formula under volatility uncertainty in the presence of discounting. We state our result under different hypotheses with respect to the derivation given by Hu, Ji, Peng and Song (Comparison theorem, Feynman-Kac formula and Girsanov transformation for BSDEs driven by G-Brownian motion, Stochastic Processes and their Applications, 124 (2)), where the Lipschitz continuity of some functionals is assumed, which is not necessarily satisfied in our setting. In particular, we obtain the $G$-conditional expectation of a discounted payoff as the limit of $C^{1,2}$ solutions of some regularized PDEs, for different kinds of convergence. In applications, this permits us to approximate such a sublinear expectation in a computationally efficient way.
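For orientation, the classical linear-expectation analogue being generalized is the discounted Feynman-Kac formula, which in one dimension and generic notation (ours, not the paper's) reads:

```latex
u(t,x) \;=\; \mathbb{E}\!\left[\, e^{-\int_t^T r(s,X_s)\,ds}\,\varphi(X_T) \,\middle|\, X_t = x \right],
\qquad
\partial_t u + \mu\,\partial_x u + \tfrac{1}{2}\sigma^2\,\partial_{xx} u - r\,u = 0,
\quad u(T,x) = \varphi(x).
```

Under volatility uncertainty the expectation is replaced by the sublinear $G$-expectation and the second-order term by its nonlinear ($G$-heat-equation-type) counterpart, which is where the regularization and limiting arguments of the paper come in.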

Inverse Gaussian quadrature and finite normal-mixture approximation of the generalized hyperbolic distribution
Jaehyuk Choi, Yeda Du, Qingshuo Song

In this study, a numerical quadrature for the generalized inverse Gaussian distribution is derived from the Gauss-Hermite quadrature by exploiting its relationship with the normal distribution. The proposed quadrature is not Gaussian, but it exactly integrates polynomials of both positive and negative orders. Using the quadrature, the generalized hyperbolic distribution is efficiently approximated as a finite normal variance-mean mixture. Therefore, expectations under the distribution, such as the cumulative distribution function and European option prices, are accurately computed as weighted sums of those under normal distributions. Generalized hyperbolic random variates can also be sampled in a straightforward manner. The accuracy of the methods is illustrated with numerical examples.
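The building block the construction starts from, Gauss-Hermite quadrature for normal expectations, can be sketched in a few lines. Note that numpy's `hermgauss` returns nodes and weights for the weight function e^{-x^2}, so a standard normal expectation requires the change of variable x -> sqrt(2)x and division by sqrt(pi).

```python
import numpy as np

# Gauss-Hermite quadrature for E[f(Z)], Z ~ N(0, 1):
# E[f(Z)] = (1/sqrt(pi)) * sum_i w_i * f(sqrt(2) * x_i),
# where (x_i, w_i) are the Gauss-Hermite nodes and weights.

def normal_expectation(f, n=32):
    x, w = np.polynomial.hermite.hermgauss(n)
    return float(np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi))

ez2 = normal_expectation(lambda z: z ** 2)   # E[Z^2] = 1 exactly
eexp = normal_expectation(np.exp)            # E[e^Z] = e^{1/2} ~ 1.6487
```

The paper's quadrature for the generalized inverse Gaussian is then obtained by transforming these nodes and weights, giving the finite normal-mixture approximation of the generalized hyperbolic law.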

Kicking You When You're Already Down: The Multipronged Impact of Austerity on Crime
Corrado Giulietti, Brendon McConnell

The UK Welfare Reform Act (2012) imposed a series of welfare cuts, which disproportionately impacted ex-ante poorer areas. In this paper we consider the impact of these austerity measures on two different but complementary elements of crime - the crime rate and the less-studied concentration of crime - over the period 2011-2015 in England and Wales, and document four new facts. First, areas more exposed to the welfare reforms experience increased levels of crime, an effect driven by a rise in violent crime. Second, both violent and property crime become more concentrated within areas as a result of the welfare reforms. Third, it is ex-ante more deprived neighborhoods that bear the brunt of the crime increases over this period. Fourth, we find no evidence that the welfare reforms increased recidivism, so the changes in crime we find are likely driven by new criminals. Combining these results, we document unambiguous evidence of a negative spillover on social welfare from the welfare reforms at the heart of the UK government's austerity program, one that reinforced the program's direct inequality-enhancing effect. More deprived districts are more exposed to the welfare reforms, and it is these districts that then experience the further negative consequences of the reforms via increased crime. Our findings underscore the importance of considering both multiple dimensions of crime and different levels of spatial aggregation of crime data. Given that it is violent crime that responds to the (economically based) welfare cuts, our work also highlights the need to develop better economic models of non-rational crime.

Market-consistent pricing with acceptable risk
Maria Arduca, Cosimo Munari

We study the range of prices at which a rational agent should contemplate transacting a financial contract outside a given securities market. Trading is subject to nonproportional transaction costs and portfolio constraints, and full replication by way of market instruments is not always possible. Rationality is defined in terms of consistency with market prices and acceptable risk thresholds. We obtain a direct and a dual description of market-consistent prices with acceptable risk based on superreplication prices and pricing densities. The dual characterization requires an appropriate extension of the classical Fundamental Theorem of Asset Pricing where the role of arbitrage opportunities is played by acceptable deals, i.e., costless investment opportunities with an acceptable risk-reward tradeoff. In particular, we highlight the importance of scalable acceptable deals, i.e., investment opportunities that are acceptable deals regardless of their volume. Our results provide a systematic treatment of, and new insights into, the theory of good deal pricing in a static setting.

Preventing COVID-19 Fatalities: State versus Federal Policies
Jean-Paul Renne, Guillaume Roussellet, Gustavo Schwenkler

Are COVID-19 fatalities large when a federal government does not enforce containment policies and instead allows states to implement their own policies? We answer this question by developing a stochastic extension of a SIRD epidemiological model for a country composed of multiple states. Our model allows for interstate mobility. We consider three policies: mask mandates, stay-at-home orders, and interstate travel bans. We fit our model to daily U.S. state-level COVID-19 death counts and exploit our estimates to produce various policy counterfactuals. While the restrictions imposed by some states prevented a significant number of virus deaths, we find that more than two-thirds of U.S. COVID-19 deaths could have been prevented by late November 2020 had the federal government enforced federal mandates as early as some of the earliest states did. Our results quantify the benefits of early action by a federal government for the containment of a pandemic.
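The deterministic SIRD core that such models extend can be sketched in a few lines. This is a minimal single-state illustration with made-up parameter values, not the paper's stochastic, multi-state model with mobility.

```python
# Minimal deterministic SIRD sketch (single state, no interstate mobility,
# illustrative parameters only). Population is normalized to 1.

def sird(beta, gamma, mu, days=500, i0=1e-4):
    """Euler-discretized SIRD: beta = transmission rate, gamma = recovery
    rate, mu = death rate. Returns final (S, I, R, D) fractions."""
    s, i, r, d = 1.0 - i0, i0, 0.0, 0.0
    for _ in range(days):
        new_inf = beta * s * i   # S -> I
        new_rec = gamma * i      # I -> R
        new_dead = mu * i        # I -> D
        s -= new_inf
        i += new_inf - new_rec - new_dead
        r += new_rec
        d += new_dead
    return s, i, r, d

# A mask mandate or stay-at-home order can be modelled, crudely, as a lower
# effective transmission rate beta; deaths fall accordingly.
s1, i1, r1, d1 = sird(beta=0.30, gamma=0.10, mu=0.005)   # no intervention
s2, i2, r2, d2 = sird(beta=0.15, gamma=0.10, mu=0.005)   # with intervention
```

Halving beta in this toy roughly halves cumulative deaths, which is the mechanism, at single-state scale, behind the policy counterfactuals in the paper.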

Revenue allocation in Formula One: a pairwise comparison approach
Dóra Gréta Petróczy, László Csató

A model is proposed to allocate Formula One World Championship prize money among the constructors. The methodology is based on pairwise comparison matrices, allows for the use of any weighting method, and makes it possible to tune the level of inequality. We introduce an axiom called scale invariance, which requires the ranking of the teams to be independent of the parameter controlling inequality. The eigenvector method is revealed to violate this condition in our dataset, while the row geometric mean method always satisfies it. The revenue allocation is not influenced by the arbitrary valuation given to the race prizes in the official points scoring system of Formula One and takes the intensity of pairwise preferences into account, contrary to the standard Condorcet method. Our approach can be used to share revenues among groups when group members are ranked several times.
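The row geometric mean method is simple to state on a toy pairwise comparison matrix. The matrix below is a made-up example, not the paper's Formula One data; the key property illustrated is that raising the entries to a power (the kind of transformation an inequality parameter induces) leaves the ranking unchanged, which is the scale invariance satisfied by the row geometric mean.

```python
import numpy as np

# Hypothetical pairwise comparison matrix: A[i, j] says how strongly
# alternative i is preferred to j, with reciprocity A[j, i] = 1 / A[i, j].
A = np.array([
    [1.00, 2.0, 4.0],
    [0.50, 1.0, 2.0],
    [0.25, 0.5, 1.0],
])

def row_geometric_mean(A):
    """Weight of each alternative: normalized geometric mean of its row."""
    w = np.prod(A, axis=1) ** (1.0 / A.shape[1])
    return w / w.sum()

weights = row_geometric_mean(A)          # [4/7, 2/7, 1/7] for this matrix
powered = row_geometric_mean(A ** 2)     # same ranking after A -> A^2
```

For the eigenvector method, by contrast, such elementwise power transformations can reorder the alternatives on inconsistent matrices, which is the violation the paper documents.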

Tensoring volatility calibration
Mariano Zeron, Ignacio Ruiz

Inspired by a series of remarkable papers in recent years that use Deep Neural Nets to substantially speed up the calibration of pricing models, we investigate the use of Chebyshev Tensors instead of Deep Neural Nets. Since Chebyshev Tensors can, under certain circumstances, explore the input space of the function to be approximated more efficiently than Deep Neural Nets, owing to their exponential convergence, the calibration of pricing models seems, a priori, a good use case for them.

In this piece of research, we built Chebyshev Tensors, either directly or with the help of the Tensor Extension Algorithms, to tackle the computational bottleneck associated with the calibration of the rough Bergomi volatility model. Results are encouraging: the accuracy of model calibration via Chebyshev Tensors is similar to that obtained with Deep Neural Nets, while the effort needed to build them was between 5 and 100 times lower in the experiments run. Our tests indicate that, when using Chebyshev Tensors, calibration of the rough Bergomi volatility model is around 40,000 times more efficient than calibrating via brute force (using the pricing function directly).
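The exponential convergence invoked above is easy to see in one dimension. The sketch below uses numpy's Chebyshev interpolation of a smooth function as a stand-in; the paper's actual objects are multi-dimensional Chebyshev Tensors built over the rough Bergomi pricing function, which this does not reproduce.

```python
import numpy as np

# Chebyshev interpolation of a smooth function converges exponentially in
# the degree: a low-degree interpolant of exp on [-1, 1] is already accurate
# to near machine precision. This fast convergence is what makes Chebyshev
# proxies competitive as replacements for expensive pricing functions.

f = np.exp
cheb = np.polynomial.chebyshev.Chebyshev.interpolate(f, deg=12, domain=[-1, 1])

x = np.linspace(-1.0, 1.0, 1001)
max_err = float(np.max(np.abs(cheb(x) - f(x))))   # ~1e-13 at degree 12
```

Once built, evaluating `cheb` costs a handful of floating-point operations per point, which is the source of the large calibration speed-ups reported.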

When does the tail wag the dog? Curvature and market making
Guillermo Angeris, Alex Evans, Tarun Chitra

Liquidity and trading activity on constant function market makers (CFMMs) such as Uniswap, Curve, and Balancer have grown significantly in the second half of 2020. Much of the growth of these protocols has been driven by incentivized pools or 'yield farming', which reward participants in crypto assets for providing liquidity to CFMMs. As a result, CFMMs and associated protocols, which were historically very small markets, now constitute the most liquid trading venues for a large number of crypto assets. But what does it mean for a CFMM to be the most liquid market? In this paper, we propose a basic definition of price sensitivity and liquidity. We show that this definition is tightly related to the curvature of a CFMM's trading function and can be used to explain a number of heuristic results. For example, we show that low-curvature markets are good for coins whose market value is approximately fixed and that high-curvature markets are better for liquidity providers when traders have an informational edge. Additionally, the results can also be used to model interacting markets and explain the rise of incentivized liquidity provision.
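Price sensitivity is concrete in the simplest CFMM, the constant-product market used by Uniswap v2. The sketch below is an illustration of that standard x*y=k mechanics with made-up pool sizes, not the paper's general curvature framework.

```python
# Constant-product CFMM (x * y = k): the marginal price of X in units of Y
# is y / x, and trades move that price along the curve. How fast the price
# moves per unit traded is the curvature-driven sensitivity the paper studies.

def constant_product_trade(x, y, dx):
    """Sell dx units of asset X into an x*y=k pool.

    Returns (dy received, marginal price before, marginal price after)."""
    k = x * y
    new_x = x + dx
    new_y = k / new_x
    dy = y - new_y
    return dy, y / x, new_y / new_x

# A 1% trade against a balanced 1000/1000 pool.
dy, p_before, p_after = constant_product_trade(x=1000.0, y=1000.0, dx=10.0)
# p_after < p_before: the price moves against the trader (slippage).
```

A flatter (lower-curvature) trading function, as in Curve's stableswap, would move the price far less for the same trade size, which is why low curvature suits near-fixed-value coins.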