# Research articles for 2021-07-05

arXiv

The aim of this short note is to establish a limit theorem for the optimal trading strategies in the setup of the utility maximization problem with proportional transaction costs. This limit theorem resolves the open question from [4]. The main idea of our proof is to establish a uniqueness result for the optimal strategy. Surprisingly, to date there are no results on the uniqueness of the optimal trading strategy. The proof of uniqueness relies heavily on the dual approach developed recently in [6,7,8].

arXiv

We analyze the limiting behavior of the risk premium associated with the Pareto optimal risk sharing contract in an infinitely expanding pool of risks under a general class of law-invariant risk measures encompassing rank-dependent utility preferences. We show that the corresponding convergence rate is typically only $n^{1/2}$ instead of the conventional $n$, where $n$ is the number of risks in the pool; the exact rate depends on the precise risk preferences.

arXiv

This discussion applies quantitative finance methods and economic arguments to cryptocurrencies in general and bitcoin in particular -- as there are about $10,000$ cryptocurrencies, we focus (unless otherwise specified) on the most discussed crypto of those that claim to hew to the original protocol (Nakamoto 2009) and the one with, by far, the largest market capitalization.

In its current version, in spite of the hype, bitcoin failed to satisfy the notion of "currency without government" (it proved to not even be a currency at all), can be neither a short nor a long-term store of value (its expected value is no higher than $0$), cannot operate as a reliable inflation hedge, and, worst of all, does not constitute, not even remotely, a safe haven for one's investments, a shield against government tyranny, or a tail protection vehicle for catastrophic episodes.

Furthermore, bitcoin promoters appear to conflate the success of a payment mechanism (as a decentralized mode of exchange), which so far has failed, with the speculative variations in the price of a zero-sum maximally fragile asset with massive negative externalities.

Going through monetary history, we show how a true numeraire must be one of minimum variance with respect to an arbitrary basket of goods and services, how gold and silver lost their inflation hedge status during the Hunt brothers squeeze in the late 1970s and what would be required from a true inflation hedged store of value.
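The minimum-variance criterion for a numeraire can be illustrated with a small numerical sketch (my own illustration with simulated data, not the authors' construction): among weightings of a basket, the one minimizing the variance of the basket's value is $w = \Sigma^{-1}\mathbf{1} / (\mathbf{1}^\top \Sigma^{-1} \mathbf{1})$.

```python
import numpy as np

# Hypothetical basket of 4 goods; price changes are simulated noise.
rng = np.random.default_rng(0)
returns = rng.normal(size=(500, 4))
cov = np.cov(returns, rowvar=False)

# Classic minimum-variance weights: w = cov^{-1} 1 / (1' cov^{-1} 1)
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= w @ ones

var_min = w @ cov @ w                      # variance of min-variance basket
var_eq = (ones / 4) @ cov @ (ones / 4)     # equal-weight benchmark
```

By construction, `var_min` cannot exceed the variance of any other weighting, including the equal-weight basket.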

arXiv

A non-linear shrinkage estimator of large-dimensional covariance matrices is derived in a setting of auto-correlated samples, thus generalizing the recent formula by Ledoit-P\'{e}ch\'{e}. The calculation is facilitated by random matrix theory. The result is turned into an efficient algorithm, and an associated Python library, shrinkage, with the help of the Ledoit-Wolf kernel estimation technique. An example of exponentially-decaying auto-correlations is presented.
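For orientation, here is a minimal sketch of the linear shrinkage baseline that nonlinear estimators of this kind generalize; the shrinkage intensity `delta` is fixed by hand for illustration, whereas the actual estimators choose it (or a full nonlinear map of the eigenvalues) from the data.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 60, 20                          # few samples relative to dimension
X = rng.normal(size=(n, p))

S = np.cov(X, rowvar=False)            # sample covariance (noisy when p ~ n)
mu = np.trace(S) / p                   # average eigenvalue: target scale
delta = 0.5                            # fixed intensity, illustration only
S_shrunk = (1 - delta) * S + delta * mu * np.eye(p)

# Shrinkage pulls extreme sample eigenvalues toward the centre while
# preserving the trace, reducing estimation error in the spectrum.
ev_raw = np.linalg.eigvalsh(S)
ev_shr = np.linalg.eigvalsh(S_shrunk)
```

The shrunk eigenvalues are exactly `(1 - delta) * ev_raw + delta * mu`, so the spectrum is compressed toward its mean.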

arXiv

The matching literature often recommends market centralization under the assumption that agents know their own preferences and that their preferences are fixed. We find counterevidence to this assumption in a quasi-experiment. In Germany's university admissions, a clearinghouse implements the early stages of the Gale-Shapley algorithm in real time. We show that early offers made in this decentralized phase, although not more desirable, are accepted more often than later ones. These results, together with survey evidence and a theoretical model, are consistent with students' costly learning about universities. We propose a hybrid mechanism to combine the advantages of decentralization and centralization.
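The mechanism whose early stages the clearinghouse runs is student-proposing deferred acceptance (Gale-Shapley). A minimal sketch, with toy preferences and one seat per university (the abstract does not specify capacities, so this is an assumption):

```python
def deferred_acceptance(student_prefs, uni_prefs):
    """Student-proposing deferred acceptance with unit capacities.

    student_prefs[s]: list of universities, most preferred first.
    uni_prefs[u]: list of students, most preferred first.
    Assumes every student can be matched (toy setting).
    """
    rank = {u: {s: k for k, s in enumerate(p)} for u, p in uni_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}
    held = {}                          # university -> tentatively held student
    free = list(student_prefs)
    while free:
        s = free.pop()
        u = student_prefs[s][next_choice[s]]   # best not-yet-tried university
        next_choice[s] += 1
        cur = held.get(u)
        if cur is None:
            held[u] = s                        # tentative "early offer"
        elif rank[u][s] < rank[u][cur]:
            held[u] = s                        # displace the held student
            free.append(cur)
        else:
            free.append(s)                     # rejected, tries next choice
    return {s: u for u, s in held.items()}

match = deferred_acceptance(
    {"s1": ["A", "B"], "s2": ["A", "B"]},
    {"A": ["s2", "s1"], "B": ["s1", "s2"]},
)
```

In this toy instance both students prefer A, but A prefers s2, so s1 ends up at B; offers held early in the process are exactly the "early offers" the paper studies.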

arXiv

We study the expression rates of deep neural networks (DNNs for short) for option prices written on baskets of $d$ risky assets, whose log-returns are modelled by a multivariate L\'evy process with general correlation structure of jumps. We establish sufficient conditions on the characteristic triplet of the L\'evy process $X$ that ensure an $\varepsilon$ error of DNN-expressed option prices with DNNs of size that grows polynomially with respect to $\varepsilon^{-1}$, and with constants implied in $\mathcal{O}(\cdot)$ which grow polynomially with respect to $d$, thereby overcoming the curse of dimensionality and justifying the use of DNNs in financial modelling of large baskets in markets with jumps.

In addition, we exploit parabolic smoothing of Kolmogorov partial integrodifferential equations for certain multivariate L\'evy processes to present alternative architectures of ReLU DNNs that provide $\varepsilon$ expression error in DNN size $\mathcal{O}(|\log(\varepsilon)|^a)$ with exponent $a \sim d$, however, with constants implied in $\mathcal{O}(\cdot)$ growing exponentially with respect to $d$. Under stronger, dimension-uniform non-degeneracy conditions on the L\'evy symbol, we obtain algebraic expression rates of option prices in exponential L\'evy models which are free from the curse of dimensionality. In this case the ReLU DNN expression rates of prices depend on certain sparsity conditions on the characteristic L\'evy triplet. We indicate several consequences and possible extensions of the present results.

arXiv

The quadratic rough Heston model provides a natural way to encode the Zumbach effect in the rough volatility paradigm. We apply multi-factor approximation and use deep learning methods to build an efficient calibration procedure for this model. We show that the model is able to reproduce very well both SPX and VIX implied volatilities. We typically obtain VIX option prices within the bid-ask spread and an excellent fit of the SPX at-the-money skew. Moreover, we also explain how to use the trained neural networks for hedging with instantaneous computation of hedging quantities.

arXiv

We embed the classical theory of stochastic finance into a differential geometric framework called Geometric Arbitrage Theory and show that it is possible to:

--Write arbitrage as curvature of a principal fibre bundle.

--Parameterize arbitrage strategies by the holonomy.

--Give the Fundamental Theorem of Asset Pricing a differential homotopic characterization.

--Characterize Geometric Arbitrage Theory by five principles and show that they are consistent with the classical theory of stochastic finance.

--Derive, for a closed market, the equilibrium solution for the market portfolio and its dynamics in the cases where:

-->Arbitrage is allowed but minimized.

-->Arbitrage is not allowed.

--Prove that the no-free-lunch-with-vanishing-risk condition implies the zero curvature condition.

arXiv

Despite half a century of research, there is still no general agreement about the optimal approach to building a robust multi-period portfolio. We address this question by proposing the detrended cluster entropy approach to estimate the portfolio weights of high-frequency market indices. The information measure produces reliable estimates of the portfolio weights from real-world market data at varying temporal horizons. The portfolio exhibits a high level of diversity, robustness and stability, as it is not affected by the drawbacks of traditional mean-variance approaches.

arXiv

In this paper we propose a theoretical model embedding a susceptible-infected-recovered-dead (SIRD) epidemic model in a dynamic macroeconomic general equilibrium framework with agents' mobility. Mobility affects both agents' income (and consumption) and their probability of infecting and of being infected. Strategic complementarities among individual mobility choices drive the evolution of aggregate economic activity, while infection externalities caused by individual mobility affect disease diffusion. Rational expectations of forward-looking agents on the dynamics of aggregate mobility and of the epidemic determine individual mobility decisions. The model allows us to evaluate alternative scenarios of mobility restrictions, especially policies dependent on the state of the epidemic. We prove the existence of an equilibrium and provide a recursive construction method for finding equilibria, which also guides our numerical investigations. We calibrate the model using the Italian experience of the COVID-19 epidemic in the period February 2020 - May 2021. We discuss how our economic SIRD (ESIRD) model produces substantially different dynamics of the economy and the epidemic with respect to a SIRD model with constant agents' mobility. Finally, by numerical explorations we illustrate how the model can be used to design an efficient policy of state-of-epidemic-dependent mobility restrictions, which mitigates the epidemic peaks stressing the health system and allows trading off the economic losses due to reduced mobility against the lower death toll due to the reduced spread of the epidemic.
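The epidemiological block can be sketched with a minimal SIRD simulation (my own illustration; the paper couples this with an equilibrium economy in which mobility scales transmission, and the parameters below are hypothetical, not the paper's calibrated values):

```python
def sird(beta, gamma, delta, days, dt=0.1, i0=1e-4):
    """Forward-Euler SIRD in population shares.

    beta: transmission rate (lower mobility ~ lower effective beta)
    gamma: recovery rate, delta: death rate.
    """
    s, i, r, d = 1.0 - i0, i0, 0.0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        s -= new_inf
        r += gamma * i * dt
        d += delta * i * dt
        i += new_inf - (gamma + delta) * i * dt
    return s, i, r, d

# Lower mobility modelled as a smaller effective transmission rate.
high = sird(beta=0.30, gamma=0.10, delta=0.01, days=300)
low = sird(beta=0.15, gamma=0.10, delta=0.01, days=300)
```

Shares sum to one throughout, and the lower-mobility scenario ends with fewer cumulative deaths, which is the trade-off the policy analysis in the paper quantifies against the economic cost of reduced mobility.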

arXiv

We provide a survey of recent results on model calibration by Optimal Transport. We present the general framework and then discuss the calibration of local, and local-stochastic, volatility models to European options, the joint VIX/SPX calibration problem as well as calibration to some path-dependent options. We explain the numerical algorithms and present examples both on synthetic and market data.

arXiv

We study the connection between risk aversion, the number of consumers and uniqueness of equilibrium. We consider an economy with two goods and $c$ impatience types, where each type has additive separable preferences with HARA Bernoulli utility function, $u_H(x):=\frac{\gamma}{1-\gamma}(b+\frac{a}{\gamma}x)^{1-\gamma}$. We show that if $\gamma\in (1, \frac{c}{c-1}]$, the equilibrium is unique. Moreover, the methods used, involving Newton's symmetric polynomials and Descartes' rule of signs, enable us to offer new sufficient conditions for uniqueness in a closed-form expression highlighting the role played by endowments, patience and specific HARA parameters. Finally, new necessary and sufficient conditions ensuring uniqueness are derived for the particular case of CRRA Bernoulli utility functions with $\gamma =3$.
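The utility function and the uniqueness condition from the abstract can be written out directly (a minimal sketch; the default parameter values are illustrative, not from the paper):

```python
from fractions import Fraction

def u_hara(x, a=1.0, b=1.0, gamma=2.0):
    """HARA Bernoulli utility: u_H(x) = gamma/(1-gamma) * (b + (a/gamma) x)^(1-gamma)."""
    return gamma / (1 - gamma) * (b + a / gamma * x) ** (1 - gamma)

def gamma_in_uniqueness_range(gamma, c):
    """Sufficient condition from the abstract: gamma in (1, c/(c-1)] with c types."""
    return 1 < gamma <= Fraction(c, c - 1)
```

For instance, with $c = 3$ impatience types the interval is $(1, 3/2]$, so $\gamma = 3/2$ qualifies while $\gamma = 2$ does not; note the interval shrinks toward $\{1\}^+$ as $c$ grows.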

arXiv

Inconsistency in pairwise comparison judgements is often perceived as an unwanted phenomenon, and researchers have proposed a number of techniques to either reduce or correct it. We take the viewpoint that this inconsistency reveals different mindsets of the decision maker(s) that should be taken into account when generating recommendations as decision support. With this aim we consider spanning tree analysis, a recently emerging idea for use with the pairwise comparison approach, which represents the plurality of mindsets in terms of a plurality of vectors corresponding to different spanning trees. Until now, the multiplicity of vectors supplied by the spanning tree approach has been amalgamated into a single preference vector, losing the information about the plurality of mindsets. To preserve this information, we propose a novel methodology taking an approach similar to Stochastic Multi-criteria Acceptability Analysis. Considering all the rankings of alternatives corresponding to the different mindsets, our methodology gives the probability that an alternative attains a given ranking position as well as the probability that an alternative is preferred to another one. Since the exponential number of spanning trees makes their enumeration prohibitive, we propose computing approximate probabilities using statistical sampling of the spanning trees. Our approach is also appealing because it can be applied to incomplete sets of pairwise comparisons. We demonstrate its usefulness with a didactic example as well as with an application to a real-life case of selecting a telecom backbone infrastructure for rural areas.
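The spanning-tree idea can be made concrete on a toy instance (my own sketch with a made-up comparison matrix): each spanning tree of the comparison graph pins down one fully consistent priority vector, and the collection of vectors induces rank-position probabilities. For four alternatives the 16 trees can be enumerated outright; the paper's sampling step replaces this enumeration at scale.

```python
from itertools import combinations
from collections import defaultdict

# Reciprocal pairwise comparison matrix: A[i][j] = preference of i over j.
A = [[1,   2,   4,   1],
     [1/2, 1,   2,   1/2],
     [1/4, 1/2, 1,   1/3],
     [1,   2,   3,   1]]
n = len(A)
edges = list(combinations(range(n), 2))

def spans(tree):
    # n-1 edges form a spanning tree iff they connect all nodes
    adj = defaultdict(list)
    for u, v in tree:
        adj[u].append(v); adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v); stack.append(v)
    return len(seen) == n

def tree_vector(tree):
    # propagate w_u / w_v = A[u][v] along tree edges from node 0
    w = {0: 1.0}
    while len(w) < n:
        for u, v in tree:
            if u in w and v not in w:
                w[v] = w[u] / A[u][v]
            elif v in w and u not in w:
                w[u] = w[v] * A[u][v]
    s = sum(w.values())
    return [w[i] / s for i in range(n)]

trees = [t for t in combinations(edges, n - 1) if spans(t)]
rank_prob = [[0.0] * n for _ in range(n)]  # rank_prob[i][k]: P(alt i at rank k)
for t in trees:
    w = tree_vector(t)
    order = sorted(range(n), key=lambda i: -w[i])
    for k, i in enumerate(order):
        rank_prob[i][k] += 1 / len(trees)
```

Each row of `rank_prob` is a probability distribution over ranking positions for one alternative, which is exactly the kind of output the methodology delivers instead of a single amalgamated vector.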

arXiv

The common belief about the growing medium of livestreaming is that its value lies in its "live" component. In this paper, we leverage data from a large livestreaming platform to examine this belief. We are able to do this as the platform also allows viewers to purchase the recorded version of the livestream. We summarize the value of livestreaming content by estimating how demand responds to price before, on the day of, and after the livestream. We do this by proposing a generalized Orthogonal Random Forest framework. This framework allows us to estimate heterogeneous treatment effects in the presence of high-dimensional confounders whose relationships with the treatment policy (i.e., price) are complex but partially known. We find significant dynamics in the price elasticity of demand over the temporal distance to the scheduled livestreaming day and after it. Specifically, demand gradually becomes less price sensitive as the livestreaming day approaches and is inelastic on the livestreaming day. Over the post-livestream period, demand is still sensitive to price, but much less so than in the pre-livestream period. This indicates that the value of livestreaming persists beyond the live component. Finally, we provide suggestive evidence for the likely mechanisms driving our results. These are quality uncertainty reduction for the patterns pre- and post-livestream and the potential of real-time interaction with the creator on the day of the livestream.
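The orthogonalization idea underlying Orthogonal Random Forests can be illustrated with a linear sketch (my own, on simulated data): residualize both demand and price on the confounders, then regress residual on residual. The actual ORF replaces the linear nuisance fits with forests and localizes the estimate to capture heterogeneity.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4000
X = rng.normal(size=(n, 3))                    # confounders
price = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(size=n)
theta = -1.5                                   # true (homogeneous) price effect
demand = theta * price + X @ np.array([1.0, 0.3, -0.7]) + rng.normal(size=n)

def residualize(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Orthogonal (partialled-out) estimate of the price effect.
r_d = residualize(demand, X)
r_p = residualize(price, X)
theta_hat = (r_p @ r_d) / (r_p @ r_p)
```

Because both nuisance regressions are removed before the final step, errors in estimating them enter the effect estimate only at second order, which is what makes the approach robust to high-dimensional confounding.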

arXiv

This study examines patterns of regionalisation in the International Trade Network (ITN). It uses Gould-Fernandez brokerage to examine the roles countries play in the ITN in linking different regional partitions. Three ITNs with varying levels of technological content are examined, representing trade in high-tech, medium-tech, and low-tech goods. Simulated network data, based on an advanced network model controlling for degree centralisation and clustering patterns, are compared to the observed data to examine whether the roles countries play within and between regions are a result of centralisation and clustering patterns. The findings indicate that they are, pointing to a need to examine the presence of hubs when investigating regionalisation and globalisation patterns in the modern global economy.

arXiv

In this work, we address time-series forecasting as a computer vision task. We capture input data as an image and train a model to produce the subsequent image. This approach results in predicting distributions as opposed to pointwise values. To assess the robustness and quality of our approach, we examine various datasets and multiple evaluation metrics. Our experiments show that our forecasting tool is effective for cyclic data but somewhat less so for irregular data such as stock prices. Importantly, when using image-based evaluation metrics, we find our method to outperform various baselines, including ARIMA, and a numerical variation of our deep learning approach.
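One simple way to render a series as an image, sketched below, is to discretize the values into vertical bins so that each time step lights one pixel; the paper's actual encoding may differ, and this is only an assumed illustration of the general idea.

```python
import numpy as np

def series_to_image(series, height=16):
    """Rasterize a 1D series into a binary (height x len) image, origin at bottom."""
    series = np.asarray(series, dtype=float)
    lo, hi = series.min(), series.max()
    rows = np.round((series - lo) / (hi - lo + 1e-12) * (height - 1)).astype(int)
    img = np.zeros((height, len(series)), dtype=np.uint8)
    img[height - 1 - rows, np.arange(len(series))] = 1
    return img

img = series_to_image(np.sin(np.linspace(0, 4 * np.pi, 64)))
```

A model trained to map such an image to the image of the following window predicts, per future time step, a column of pixel intensities, which is naturally read as a distribution over values rather than a point forecast.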