Research articles for 2020-06-14
arXiv
We propose a fully data-driven approach to calibrate local stochastic volatility (LSV) models, circumventing in particular the ad hoc interpolation of the volatility surface. To achieve this, we parametrize the leverage function by a family of feed-forward neural networks and learn their parameters directly from the available market option prices. This should be seen in the context of neural SDEs and (causal) generative adversarial networks: we generate volatility surfaces by specific neural SDEs, whose quality is assessed by quantifying, in an adversarial manner, distances to market prices. The minimization of the calibration functional relies strongly on a variance reduction technique based on hedging and deep hedging, which is interesting in its own right: it allows us to calculate model prices and model implied volatilities accurately using only small sets of sample paths. For numerical illustration we implement a SABR-type LSV model and conduct a thorough statistical performance analysis on many samples of implied volatility smiles, demonstrating the accuracy and stability of the method.
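A minimal sketch of the calibration loop described above, assuming illustrative SABR-type dynamics, placeholder market quotes, and a plain Monte Carlo estimator (the paper's hedging-based control variate is omitted for brevity); `LeverageNet` and all parameter values are hypothetical:

```python
# Sketch: learn a neural-network leverage function by gradient descent on
# squared option-pricing errors. Not the authors' implementation; all
# dynamics, quotes, and hyperparameters below are illustrative assumptions.
import torch

torch.manual_seed(0)

class LeverageNet(torch.nn.Module):
    """Feed-forward network for the leverage function L(t, s)."""
    def __init__(self, width=32):
        super().__init__()
        self.f = torch.nn.Sequential(
            torch.nn.Linear(2, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 1), torch.nn.Softplus())  # keep L > 0
    def forward(self, t, s):
        return self.f(torch.stack([t, s], dim=-1)).squeeze(-1)

def simulate(lev, n_paths=4096, n_steps=50, T=1.0, s0=1.0, a0=0.3, nu=0.5):
    """Euler scheme for illustrative SABR-type LSV dynamics:
       dS = L(t, S) a S dW,  da = nu a dB  (independent W, B)."""
    dt = T / n_steps
    s = torch.full((n_paths,), s0)
    a = torch.full((n_paths,), a0)
    for i in range(n_steps):
        t = torch.full((n_paths,), i * dt)
        dw = torch.randn(n_paths) * dt ** 0.5
        db = torch.randn(n_paths) * dt ** 0.5
        s = (s + lev(t, s) * a * s * dw).clamp(min=1e-8)
        a = a * torch.exp(nu * db - 0.5 * nu ** 2 * dt)
    return s

lev = LeverageNet()
opt = torch.optim.Adam(lev.parameters(), lr=1e-3)
strikes = torch.tensor([0.9, 1.0, 1.1])
market_prices = torch.tensor([0.120, 0.060, 0.025])  # placeholder quotes

for step in range(200):
    opt.zero_grad()
    s_T = simulate(lev)
    payoffs = (s_T.unsqueeze(-1) - strikes).clamp(min=0.0)
    loss = ((payoffs.mean(dim=0) - market_prices) ** 2).sum()
    loss.backward()
    opt.step()
```

The key point is that the Euler scheme is differentiable in the network parameters, so the squared pricing error can be minimized end to end by stochastic gradient descent; the hedging-based variance reduction would replace the plain payoff average with a hedged estimator.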
arXiv
In constant-parameter compartmental models, an early onset of herd immunity is at odds with estimates of R values from early-stage growth. In a bid to resolve this conundrum we draw inspiration from de Finetti's theorem and exhibit equivalence classes of meta-population models that are orbits of the symmetric group. We illustrate with a mixture of stochastic SIR models in which growth can be inferred from a classic bond pricing formula of Vasicek. This approach exploits the symmetry of model observables and then uses convexity adjustments to directly determine the degree of variation needed to locate the orbit of nature's model. Convexity adjustments are also useful, and material, for cross-sectional comparison. We consider some stylized population density profiles and derive easy-to-use rules of thumb for estimating the threshold infection level in one region given knowledge of the threshold infection level in another.
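The convexity adjustment can be illustrated in a few lines: the homogeneous herd-immunity threshold 1 - 1/R is concave in R, so averaging it across a mixture of regions gives a lower figure than plugging the average R into the formula. The lognormal spread of regional R values below is an assumption for illustration, not the paper's calibration:

```python
# Jensen's-inequality illustration (assumed lognormal regional R values):
# the threshold 1 - 1/R is concave in R, so the mean regional threshold
# lies below the threshold computed from the mean R.
import numpy as np

rng = np.random.default_rng(0)
R = rng.lognormal(mean=np.log(2.5), sigma=0.3, size=100_000)

mean_of_thresholds = (1.0 - 1.0 / R).mean()   # average over the mixture
threshold_of_mean = 1.0 - 1.0 / R.mean()      # naive homogeneous estimate
print(f"mean of thresholds:  {mean_of_thresholds:.3f}")
print(f"threshold at mean R: {threshold_of_mean:.3f}")  # strictly larger
```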
arXiv
We analyze how digital platforms can increase the survival rate of firms during a crisis by providing continuity in access to customers. Using order-level data from Uber Technologies, we study how the COVID-19 pandemic and the ensuing shutdown of businesses in the United States affected independent small-business restaurant supply and demand on the Uber Eats platform. We find evidence that small restaurants experience significant increases in total activity, orders per day, and orders per hour following the closure of the dine-in channel, and that these increases may be due to both demand-side and supply-side shocks. We document an increase in the intensity of competitive effects following the shock, showing that growth in the number of providers on a platform induces both market expansion and heightened inter-provider competition. Our findings underscore the critical role that digital platforms will play in creating business resilience in the post-COVID economy, and provide new managerial insight into how supply-side and demand-side factors shape business performance on a platform.
arXiv
We analyze theoretically the problem of testing for p-hacking based on distributions of p-values across multiple studies. We provide general results on when such distributions have testable restrictions under the null of no p-hacking. We find novel additional testable restrictions for p-values based on t-tests. Analytical characterizations of the distributions of p-values under the null of no p-hacking, and under the alternative where there is p-hacking, allow us both to analyze the power of existing tests and to provide new, more powerful statistical tests for p-hacking. The results are extended to practical situations where there is publication bias and where reported p-values are rounded. We also show that tests for p-hacking based on distributions of t-statistics can be problematic and may not control size. Our proposed tests are shown to have good properties in Monte Carlo studies and are applied to two datasets of p-values.
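One standard restriction of this kind (a generic illustration, not necessarily one of the authors' proposed tests) is that, absent p-hacking, the density of reported p-values is non-increasing, so the bin just below 0.05 should hold no more mass than the bin to its left; a one-sided binomial comparison of the two bins gives a conservative test. The bin edges and the toy data are assumptions:

```python
# Sketch of a bunching test: under a non-increasing p-value density, a
# p-value landing in (0.03, 0.05] is in the upper half (0.04, 0.05] with
# probability at most 1/2, so a surplus there is evidence of p-hacking.
from scipy.stats import binomtest

def bunching_test(pvals, lo=0.03, mid=0.04, hi=0.05):
    lower = sum(lo < p <= mid for p in pvals)
    upper = sum(mid < p <= hi for p in pvals)
    return binomtest(upper, upper + lower, p=0.5, alternative="greater")

# Toy example: a pile-up just under 0.05 yields a small p-value.
reported = [0.012, 0.021, 0.031, 0.044, 0.046, 0.047, 0.048, 0.049, 0.049]
print(bunching_test(reported).pvalue)
```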
arXiv
Even as the world economy has integrated rapidly, many regional trade agreements (RTAs) have emerged since the early 1990s. This seeming contradiction has encouraged scholars and policy makers to explore the true effects of RTAs on both regional and global trade relationships. This paper defines synthesized trade resistance and decomposes it into natural and artificial factors. We separate the influence of geographical distance, economic volume, and overall increases in transportation and labor costs, and use the expectation-maximization (EM) algorithm to optimize the parameters and quantify the trade purity indicator, which describes the true global trade environment and the relationships among countries. The results indicate that although global and most regional trade relations gradually deteriorated during the period 2007-2017, RTAs generate trade relations among members, contributing especially to the relative prosperity of EU and NAFTA countries. In addition, we construct a network to reflect the purity of trade relations among countries. The effects of RTAs can be analyzed by comparing typical trade unions and trade communities, which are presented using an empirical network structure. This analysis shows that the community structure is quite consistent with some trade unions, and that the representative RTAs constitute the core structure of the international trade network. However, the role of trade unions has weakened, and multilateral trade liberalization has accelerated over the past decade. This suggests that more countries have recently tended to expand their trading partners beyond these unions rather than limit their trading activities to RTAs.
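As a rough illustration of the estimation step (the gravity specification, the two-component mixture, and all parameters are assumptions, not the paper's model), one can regress log trade flows on gravity covariates and run EM on the residuals, reading the posterior weight of the low-friction component as a crude purity-style indicator:

```python
# Gravity regression plus two-component Gaussian EM on the residuals.
# Synthetic data throughout; purely illustrative of the EM mechanics.
import numpy as np

rng = np.random.default_rng(1)
n = 500
log_gdp_i, log_gdp_j = rng.normal(10, 1, n), rng.normal(10, 1, n)
log_dist = rng.normal(7, 0.5, n)
resid_true = np.where(rng.random(n) < 0.3, rng.normal(-1, 0.3, n),
                      rng.normal(0, 0.3, n))     # 30% "high-friction" pairs
log_flow = log_gdp_i + log_gdp_j - log_dist + resid_true

X = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist])
beta, *_ = np.linalg.lstsq(X, log_flow, rcond=None)
r = log_flow - X @ beta                          # gravity residuals

mu = np.array([r.min(), r.max()])                # separated initial means
sd, w = np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(100):
    dens = w * np.exp(-0.5 * ((r[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    post = dens / dens.sum(axis=1, keepdims=True)   # E-step: responsibilities
    w = post.mean(axis=0)                           # M-step: mixture weights
    mu = (post * r[:, None]).sum(axis=0) / post.sum(axis=0)
    sd = np.sqrt((post * (r[:, None] - mu) ** 2).sum(axis=0) / post.sum(axis=0))

purity = post[:, np.argmax(mu)]   # weight on the low-friction component
```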
arXiv
In this paper, an approximate version of the Barndorff-Nielsen and Shephard model, driven by a Brownian motion and a Lévy subordinator, is formulated. The first-exit time of the log-return process for this model is analyzed. It is shown that, with a certain probability, the first-exit time of the log-return process decomposes into the sum of the first-exit time of a Brownian motion with drift and the first-exit time of a Lévy subordinator with drift. Subsequently, the probability density functions of the first-exit times of some specific Lévy subordinators, connected to stationary, self-decomposable variance processes, are studied. Analytical expressions for the probability density functions of the first-exit times of three such Lévy subordinators are obtained in terms of various special functions. The results are applied to an empirical S&P 500 dataset.
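The Brownian component of this decomposition admits a quick sanity check: the first time $\mu t + W_t$ reaches a level $b>0$ follows an inverse Gaussian law with mean $b/\mu$ and shape $b^2$, a classical fact that a crude Euler simulation reproduces. The parameters below are arbitrary choices, and the subordinator part, which is the paper's focus, is not simulated here:

```python
# Monte Carlo first-exit times of mu*t + W_t versus the inverse Gaussian
# law IG(mean=b/mu, shape=b^2); scipy parametrizes this as
# invgauss(mu=mean/shape, scale=shape). The finite horizon slightly
# truncates the tail of the simulated sample.
import numpy as np
from scipy.stats import invgauss

mu, b = 1.0, 1.0
dt, n_paths, n_steps = 2e-3, 2000, 4000
rng = np.random.default_rng(2)

steps = mu * dt + np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = steps.cumsum(axis=1)
crossed = (paths >= b).any(axis=1)
first_idx = (paths >= b).argmax(axis=1)     # first index at or above b
exit_times = (first_idx[crossed] + 1) * dt

print("simulated mean exit time:", exit_times.mean())
print("theoretical mean (b/mu): ", invgauss.mean(mu=(b / mu) / b**2, scale=b**2))
```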
arXiv
Automated market makers, first popularized by Hanson's logarithmic market scoring rule (LMSR) for prediction markets, have become important building blocks, called 'primitives,' for decentralized finance. A particularly useful primitive is the ability to measure the price of an asset, a problem often known as the pricing oracle problem. In this paper, we focus on the analysis of a very large class of automated market makers, called constant function market makers (CFMMs), which includes existing popular market makers such as Uniswap, Balancer, and Curve, whose yearly transaction volume totals billions of dollars. We give sufficient conditions such that, under fairly general assumptions, agents who interact with these constant function market makers are incentivized to correctly report the price of an asset and can do so in a computationally efficient way. We also derive several other useful properties that were previously unknown. These include lower bounds on the total value of assets held by CFMMs and lower bounds guaranteeing that no agent can, by any set of trades, drain the reserves of assets held by a given CFMM.
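For the special case of a constant-product market maker (the rule x * y = k used by Uniswap, one simple instance of the class rather than the paper's general argument), the incentive to report prices correctly is easy to see: with no fees, an arbitrageur facing an external price maximizes profit by trading until the pool's marginal price matches it:

```python
# Constant-product CFMM: reserves satisfy x * y = k, and the pool's
# marginal price of the asset in the numeraire is y / x. Given an external
# price p, the profit-maximizing no-fee trade moves the reserves to
# x' = sqrt(k / p), y' = sqrt(k * p), so the post-trade price y'/x' = p.
import math

def arbitrage(x, y, p):
    """Return post-trade reserves and the arbitrageur's profit at price p."""
    k = x * y
    x_new = math.sqrt(k / p)
    y_new = math.sqrt(k * p)
    profit = (x - x_new) * p + (y - y_new)   # trade valued at external price
    return x_new, y_new, profit

x, y = 100.0, 10_000.0          # pool price starts at y / x = 100
for p in (100.0, 120.0, 80.0):
    x2, y2, profit = arbitrage(x, y, p)
    print(f"external {p:6.1f} -> pool price {y2 / x2:6.1f}, profit {profit:8.2f}")
```

At p = 100 the pool already agrees with the market and the profit is zero; any deviation creates a strictly profitable trade that restores the reported price.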
arXiv
This paper studies an infinite-horizon optimal consumption problem under exponential utility, together with a non-negativity constraint on the consumption rate and a reference point given by the past consumption peak. The performance is measured by the distance between the consumption rate and a fraction $0\leq\lambda\leq 1$ of the historical consumption maximum. To overcome the path-dependent nature of the problem, the running maximum of consumption is chosen as an auxiliary state process, rendering the value function two-dimensional in the wealth variable $x$ and the reference variable $h$. The associated Hamilton-Jacobi-Bellman (HJB) equation is expressed piecewise across different regions to take the constraints into account. By employing the dual transform and the smooth-fit principle, the classical solution of the HJB equation is obtained in analytical form, which in turn yields the optimal investment and consumption in feedback form. For $0<\lambda<1$, we find four boundary curves $x_1(h)$, $\breve{x}(h)$, $x_2(h)$ and $x_3(h)$ for the wealth level $x$, nonlinear functions of $h$, such that the feedback optimal consumption satisfies: (i) $c^*(x,h)=0$ when $x\leq x_1(h)$; (ii) $0<c^*(x,h)<\lambda h$ when $x_1(h)<x<\breve{x}(h)$; (iii) $\lambda h\leq c^*(x,h)<h$ when $\breve{x}(h)\leq x<x_2(h)$; (iv) $c^*(x,h)=h$ with the running maximum process remaining flat when $x_2(h)\leq x<x_3(h)$; (v) $c^*(x,h)=h$ with the running maximum process increasing when $x=x_3(h)$. Similar conclusions hold, in a simpler fashion, for the two extreme cases $\lambda=0$ and $\lambda=1$. Numerical examples are presented to illustrate the theoretical conclusions and financial insights.
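Collecting the five regimes quoted above into a single piecewise expression (a restatement of the abstract, with the behavior of the running maximum noted alongside):

```latex
c^*(x,h)=
\begin{cases}
0, & x \le x_1(h),\\[2pt]
\text{some value in } (0,\lambda h), & x_1(h) < x < \breve{x}(h),\\[2pt]
\text{some value in } [\lambda h, h), & \breve{x}(h) \le x < x_2(h),\\[2pt]
h \quad (\text{running maximum flat}), & x_2(h) \le x < x_3(h),\\[2pt]
h \quad (\text{running maximum increases}), & x = x_3(h).
\end{cases}
```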
arXiv
Approximately half of the global population does not have access to the internet, even though digital access can reduce poverty by revolutionizing economic development opportunities. Due to a lack of data, Mobile Network Operators (MNOs), governments, and other digital ecosystem actors struggle to determine whether telecommunication investments are viable, especially in greenfield areas where demand is unknown. This leads to under-investment in network infrastructure, resulting in a phenomenon commonly referred to as the 'digital divide'. In this paper we present a method that uses publicly available satellite imagery to predict telecoms demand metrics, including cell phone adoption and spending on mobile services, and apply the method to Malawi and Ethiopia. A predictive machine learning approach can capture up to 40% of the variance in the data, compared to existing approaches, which explain only up to 20%. The method is a starting point for developing more sophisticated predictive models of telecom infrastructure demand using publicly available satellite imagery and image recognition techniques. The evidence produced can help better inform investment and policy decisions that aim to reduce the digital divide.
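A minimal sketch of this style of pipeline (the features, target, and model are assumptions for illustration, not the paper's specification): tabular features extracted from satellite imagery feed a regression whose cross-validated R^2 is the "variance explained" figure quoted above:

```python
# Predict a telecom demand metric from synthetic imagery-derived features
# (e.g., nightlight intensity, building density) and report out-of-sample
# variance explained. Entirely illustrative data and model choice.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
nightlights = rng.gamma(2.0, 1.0, n)       # stand-in satellite features
building_density = rng.beta(2, 5, n)
X = np.column_stack([nightlights, building_density])
spend = 5 * nightlights + 20 * building_density + rng.normal(0, 3, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
r2 = cross_val_score(model, X, spend, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f}")   # variance explained
```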
arXiv
A market portfolio is a portfolio in which each asset is held at a weight proportional to its market value. A swap portfolio is a portfolio in which each asset of a pair is held at a weight proportional to the market value of the other. A reverse-weighted index portfolio is a portfolio in which the weights of the market portfolio are swapped pairwise by rank. Swap portfolios are functionally generated, and in a coherent market they have higher asymptotic growth rates than the market portfolio. Although reverse-weighted portfolios with two or more pairs of assets are not functionally generated, in a market represented by a first-order model with symmetric variances they will grow faster than the market portfolio. This result is applied to a market of commodity futures, where we show that the reverse price-weighted portfolio substantially outperformed the price-weighted portfolio from 1977 to 2018.
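The reverse-weighted construction quoted above is a one-liner to instantiate (toy market values assumed for illustration):

```python
# Build a reverse-weighted portfolio by swapping market weights pairwise
# by rank: the largest asset receives the smallest weight, and so on.
import numpy as np

market_caps = np.array([50.0, 20.0, 15.0, 10.0, 5.0])   # toy market values
w_market = market_caps / market_caps.sum()               # market portfolio

order = np.argsort(w_market)               # asset indices, smallest weight first
w_reverse = np.empty_like(w_market)
w_reverse[order] = w_market[order[::-1]]   # swap weights pairwise by rank

print("market: ", np.round(w_market, 3))
print("reverse:", np.round(w_reverse, 3))  # largest asset gets smallest weight
```

With a single pair of assets this coincides with the swap portfolio, since each asset is then held at a weight proportional to the other's market value.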