# Research articles for 2019-08-25

arXiv

In this paper we consider the problem of minimising drawdown in a portfolio of financial assets. Here drawdown represents the relative opportunity cost of the single best missed trading opportunity over a specified time period. We formulate the problem (minimising average drawdown, maximum drawdown, or a weighted combination of the two) as a nonlinear program and show how it can be partially linearised by replacing one of the nonlinear constraints by equivalent linear constraints.

Computational results are presented (generated using the nonlinear solver SCIP) for three test instances drawn from the EURO STOXX 50, the FTSE 100 and the S&P 500 with daily price data over the period 2010-2016. We present results for long-only drawdown portfolios as well as results for portfolios with both long and short positions. These indicate that (on average) our minimal drawdown portfolios dominate the market indices in terms of return, Sharpe ratio, maximum drawdown and average drawdown over the (approximately 1800 trading day) out-of-sample period.
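The drawdown quantities named above can be made concrete: for a daily price (or portfolio value) path, the drawdown on each date is the relative drop from the running peak, and the average and maximum drawdowns are aggregates of that series. A minimal sketch, with made-up prices (not data from the paper):

```python
# Hedged sketch: per-day relative drawdown of a price series.
# The price path below is invented for illustration only.

def drawdowns(prices):
    """Relative drop from the running peak at each date."""
    peak = prices[0]
    series = []
    for p in prices:
        peak = max(peak, p)
        series.append((peak - p) / peak)
    return series

prices = [100, 104, 102, 98, 103, 110, 107, 101, 109, 112]
dd = drawdowns(prices)
max_dd = max(dd)               # maximum drawdown over the period
avg_dd = sum(dd) / len(dd)     # average drawdown over the period
```

A weighted objective of the kind the paper optimises would then be `w * max_dd + (1 - w) * avg_dd` for some weight `w` in [0, 1]; the paper's contribution is minimising such objectives over portfolio weights, which this sketch does not attempt.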

arXiv

We present a neural-network-based calibration method that performs the calibration task within a few milliseconds for the full implied volatility surface. The framework is consistently applicable throughout a range of volatility models (including the rough volatility family) and a range of derivative contracts. In this work, neural networks serve as an off-line approximation of complex pricing functions, which are difficult to represent or time-consuming to evaluate by other means. We highlight how this perspective opens new horizons for quantitative modelling: the calibration bottleneck posed by slow pricing of derivative contracts is lifted, bringing several numerical pricers and model families (such as rough volatility models) within the scope of applicability in industry practice. The form in which information from available data is extracted and stored influences network performance: our approach is inspired by representing the implied volatility surface and option prices as a collection of pixels. In a number of applications we demonstrate the prowess of this modelling approach regarding accuracy, speed, robustness and generality, as well as its potential for model recognition.
<reasoning>…</reasoning>
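The pixel idea can be illustrated concretely: sample the implied volatility surface on a fixed (maturity × strike) grid, so that each grid cell is one "pixel" and the whole surface is a fixed-size image a network can consume. The smile function below is a made-up toy, not any model from the paper:

```python
# Hedged sketch: an implied vol surface stored as a fixed grid ("image").
# toy_implied_vol is a hypothetical smile for illustration only.

def toy_implied_vol(maturity, strike, spot=1.0):
    # Invented smile: base vol plus a quadratic moneyness term.
    return 0.2 + 0.5 * (strike / spot - 1.0) ** 2

maturities = [0.25, 0.5, 1.0, 2.0]       # grid rows
strikes = [0.8, 0.9, 1.0, 1.1, 1.2]      # grid columns
surface = [[toy_implied_vol(T, K) for K in strikes] for T in maturities]

# Flattened, the grid is a fixed-length vector: a natural network
# input/output regardless of which volatility model produced it.
flat = [v for row in surface for v in row]
```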

arXiv

This paper extends the core results of discrete time infinite horizon dynamic programming theory to the case of state-dependent discounting. The traditional constant-discount condition requires that the discount factor of the controller is strictly less than one. Here we replace the constant factor with a discount factor process and require, in essence, that the process is strictly less than one on average in the long run. We prove that, under this condition, the standard optimality results can be recovered, including Bellman's principle of optimality, convergence of value function iteration and convergence of policy function iteration. We also show that the condition cannot be weakened in many standard settings. The dynamic programming framework considered in the paper is general enough to contain features such as recursive preferences. Several applications are discussed.
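The "strictly less than one on average" condition can be seen in a toy computation: value function iteration still converges even when the discount factor exceeds one in some states, as long as the discounting is contractive along the chain. Everything below (the two-state chain, rewards, discounts, and the omission of actions) is an invented sketch, not the paper's framework:

```python
# Hedged sketch: value iteration with a state-dependent discount factor.
# beta exceeds 1 in state 1, but diag(beta) @ P has spectral radius
# about 0.985 < 1, so the Bellman-style update below still converges.

P = [[0.5, 0.5], [0.5, 0.5]]    # transition probabilities (made up)
beta = [0.95, 1.02]             # state-dependent discount factors
reward = [1.0, 2.0]             # per-state reward (no actions, for brevity)

v = [0.0, 0.0]
for _ in range(2000):
    v = [reward[s] + beta[s] * sum(P[s][t] * v[t] for t in (0, 1))
         for s in (0, 1)]
# v converges to the fixed point of v = r + diag(beta) P v,
# here (96, 104).
```

With a constant discount factor this convergence is the textbook contraction argument; the point of the sketch is that the same iteration survives a discount process that is only contractive on average.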

arXiv

Several systematic studies have suggested that a large fraction of published research is not reproducible. One probable reason for low reproducibility is insufficient sample size, resulting in low power and low positive predictive value. It has been suggested that insufficient sample-size choice is driven by a combination of scientific competition and 'positive publication bias'. Here we formalize this intuition in a simple model, in which scientists choose economically rational sample sizes, balancing the cost of experimentation with income from publication. Specifically, assuming that a scientist's income derives only from 'positive' findings (positive publication bias) and that individual samples cost a fixed amount allows us to leverage basic statistical formulas into an economic optimality prediction. We find that if effects have i) low base probability, ii) small effect size or iii) low grant income per publication, then the rational (economically optimal) sample size is small. Furthermore, for plausible distributions of these parameters we find a robust emergence of a bimodal distribution of obtained statistical power and low overall reproducibility rates, matching empirical findings. Overall, the model describes a simple mechanism explaining both the prevalence and the persistence of small sample sizes. It suggests economic rationality, or economic pressures, as a principal driver of irreproducibility.

arXiv

In this study, we consider research and development investment by the government. Our study is motivated by the bias in budget allocation caused by the competitive funding system. In our model, each researcher presents research plans and expenses, and the government selects research plans in two periods (before and after the government knows its favorite plan) and spends funds on the adopted plans in each period. We demonstrate that, in a subgame perfect equilibrium, the government adopts as many active plans as possible and funds them equally. In equilibrium, the selected plans are distributed proportionally; thus, the investment in research projects is symmetric and unbiased. Our results imply that spreading expenditure equally across all research fields is better than selecting and concentrating on a few specific fields.

arXiv

Several studies of the Job Corps tend to find more positive earnings effects for males than for females. This effect heterogeneity favouring males contrasts with the results of most other training programme evaluations. Applying the translated quantile approach of Bitler, Hoynes, and Domina (2014), I investigate a potential mechanism behind these surprising findings for the Job Corps. My results provide suggestive evidence that the effect heterogeneity by gender operates through existing gender earnings inequality rather than through differences in trainability between the genders.

arXiv

Techniques from deep learning play an increasingly important role in the calibration of financial models. The pioneering paper by Hernandez [Risk, 2017] was a catalyst for resurfacing interest in research in this area. In this paper we advocate an alternative (two-step) approach that uses deep learning techniques solely to learn the pricing map -- from model parameters to prices or implied volatilities -- rather than directly learning the calibrated model parameters as a function of observed market data. Having a fast and accurate neural-network-based approximation of the pricing map (first step), we can then (second step) use traditional model calibration algorithms. In this work we showcase a direct comparison of different potential approaches to the learning stage and present algorithms that provide sufficient accuracy for practical use. We provide a first neural-network-based calibration method for rough volatility models, for which calibration can be done on the fly. We demonstrate the method via a hands-on calibration engine on the rough Bergomi model, for which classical calibration techniques are difficult to apply due to the high cost of all known numerical pricing methods. Furthermore, we display and compare different types of sampling and training methods and elaborate on their advantages under different objectives. As a further application we use the fast pricing method for a Bayesian analysis of the calibrated model.
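The two-step structure is easy to illustrate in miniature. Below, a one-parameter toy pricer stands in for the expensive model, a piecewise-linear interpolator over a precomputed grid stands in for the trained network (step 1), and a brute-force least-squares search stands in for the classical calibrator (step 2). Every function and parameter here is an invented stand-in, not the paper's method:

```python
import bisect

def slow_price(theta):
    """Hypothetical expensive pricing function of one model parameter."""
    return theta ** 2 + 0.1 * theta

# Step 1: "learn" the pricing map off-line (interpolation as a
# stand-in for the neural network).
grid = [i / 50 for i in range(51)]
values = [slow_price(t) for t in grid]

def surrogate(theta):
    """Fast approximate pricing map (linear interpolation on [0, 1])."""
    i = min(max(bisect.bisect_right(grid, theta) - 1, 0), len(grid) - 2)
    w = (theta - grid[i]) / (grid[i + 1] - grid[i])
    return (1 - w) * values[i] + w * values[i + 1]

# Step 2: classical calibration against the cheap surrogate.
market_quote = slow_price(0.3)   # synthetic "observed" market price
theta_hat = min((j / 1000 for j in range(1001)),
                key=lambda t: (surrogate(t) - market_quote) ** 2)
```

The point of the split is visible even here: the expensive pricer is evaluated only during the off-line step, while the calibration loop touches only the fast surrogate, so any standard optimiser can be used in step 2.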

arXiv

This article presents a set of tools for modeling a spatial allocation problem in a large geographic market and gives examples of applications. In our setting, the market is described by a network that maps the cost of travel between each pair of adjacent locations. Two types of agents are located at the nodes of this network. The buyers choose the most competitive sellers depending on their prices and the cost of reaching them; their utility is assumed additive in both these quantities. Each seller, taking other sellers' prices as given, sets her own price so that her demand equals the demand we observe. We give a linear programming formulation of the equilibrium conditions. After formally introducing our model, we apply it to two examples: prices offered by petrol stations and quality of services provided by maternity wards. These examples illustrate the applicability of our model for aggregating demand, ranking prices and estimating cost structure over the network. We emphasize that large-scale data sets can be handled using modern linear programming solvers such as Gurobi. In addition to this paper, we have released an R toolbox implementing our results and an online tutorial (this http URL)
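The demand side of the model can be sketched on a tiny made-up network: each buyer picks the seller minimising price plus travel cost (utility additive in both), and a seller's demand is the mass of buyers choosing her. The equilibrium/linear-programming step of the paper is omitted; nodes, costs, prices and buyer masses are all invented:

```python
# Hedged sketch: buyer choice and resulting seller demand on a toy
# three-node network A - B - C.

travel = {                       # symmetric travel costs between nodes
    ("A", "A"): 0, ("A", "B"): 1, ("A", "C"): 3,
    ("B", "B"): 0, ("B", "C"): 1, ("C", "C"): 0,
}
def cost(i, j):
    return travel.get((i, j), travel.get((j, i)))

sellers = {"A": 10.0, "C": 9.0}      # node -> posted price
buyers = {"A": 5, "B": 3, "C": 2}    # node -> number of buyers there

demand = {s: 0 for s in sellers}
for node, mass in buyers.items():
    # Buyer utility is additive in price and travel cost.
    best = min(sellers, key=lambda s: sellers[s] + cost(node, s))
    demand[best] += mass
```

In the paper the logic runs the other way: observed demands are given, and each seller's price is recovered from the condition that this choice rule reproduces them, which is what the linear program encodes.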

arXiv

The continuous-time version of Kyle's (1985) model is studied, in which market makers are not fiduciaries. They have some market power, which they utilize to set the price to their advantage, resulting in positive expected profits. This has several implications for the equilibrium, the most important being that by setting a modest fee conditional on the order flow, the market maker can obtain a profit of the same order of magnitude as, and even better than, that of a perfectly informed insider. Our model also indicates why speculative prices are more volatile than predicted by fundamentals.
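For context, the single-period Kyle (1985) benchmark that this paper departs from has a closed-form linear equilibrium, and in it a competitive market maker earns zero expected profit. The numbers and the per-unit fee below are illustrative assumptions; the paper's fee mechanism is not reproduced here:

```python
import math

# Hedged sketch: classic one-period Kyle (1985) equilibrium.
# Fundamental value v ~ N(mu, sigma_v^2); noise demand u ~ N(0, sigma_u^2).
#   insider:      x = beta * (v - mu),   beta = sigma_u / sigma_v
#   market maker: p = mu + lam * (x + u), lam = sigma_v / (2 * sigma_u)

sigma_v, sigma_u = 2.0, 1.5
beta = sigma_u / sigma_v
lam = sigma_v / (2 * sigma_u)
insider_profit = sigma_u * sigma_v / 2   # insider's expected profit

# Total order flow x + u is N(0, 2 * sigma_u^2). A hypothetical fee f
# per unit of absolute order flow would therefore earn, in expectation,
# f * E|x + u| = f * sqrt(2 * (2 * sigma_u**2) / pi).
f = 0.1
fee_revenue = f * math.sqrt(2 * (2 * sigma_u ** 2) / math.pi)
```

Against this zero-profit benchmark, the paper's claim is that a non-fiduciary market maker with price-setting power plus an order-flow-contingent fee can match or exceed the insider's expected profit.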