Research articles for 2019-08-13

A zero interest rate Black-Derman-Toy model
Grzegorz Krzyżanowski, Ernesto Mordecki, Andrés Sosa
arXiv

We propose a modification of the classical Black-Derman-Toy (BDT) interest rate tree model which includes, at each step, the possibility of a jump with small probability to a practically zero interest rate. The corresponding BDT algorithms are modified accordingly to calibrate the tree containing the zero interest rate scenarios. This modification is motivated by the 2008-2009 crisis in the United States, and it quantifies the risk of a future crisis in bond prices and derivatives. The proposed model can be used to price derivatives, and this exercise also provides a tool to calibrate the probability of such an event. A comparison of option prices and implied volatilities on US Treasury bonds computed with both the proposed and the classical tree model is provided for six different scenarios spanning the years 2002-2017.
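
As a rough illustration of the idea (not the authors' calibration algorithm), the Python sketch below prices a zero-coupon bond on a recombining BDT-style short-rate lattice augmented, at every step, with a small-probability jump to a near-zero "crisis" rate. The function name, the toy lattice, the jump probability p_crisis, the rate r_crisis and the assumption that the crisis state is absorbing are all placeholders chosen for the example.

    import numpy as np

    def bond_price_with_crisis(rates, p_crisis, dt=1.0, face=1.0, r_crisis=1e-4):
        """Price a zero-coupon bond by backward induction on a recombining short-rate
        lattice.  rates[i] is the vector of short rates at step i (length i+1), e.g. a
        calibrated BDT lattice r_{i,j} = a_i * exp(b_i * j).  At every step the economy
        may instead jump, with probability p_crisis, to a near-zero rate r_crisis,
        assumed absorbing for simplicity."""
        n = len(rates)
        v = np.full(n + 1, face)                     # payoff at maturity
        for i in range(n - 1, -1, -1):
            v_crisis = face * np.exp(-r_crisis * dt * (n - i - 1))  # value once in the crisis state
            cont = 0.5 * (v[:-1] + v[1:])            # risk-neutral up/down average
            v = np.exp(-rates[i] * dt) * ((1.0 - p_crisis) * cont + p_crisis * v_crisis)
        return float(v[0])

    # toy lattice: 3% centre with +/-20% log-steps, five annual periods
    toy = [0.03 * np.exp(0.2 * (np.arange(i + 1) - i / 2)) for i in range(5)]
    print(bond_price_with_crisis(toy, p_crisis=0.0), bond_price_with_crisis(toy, p_crisis=0.02))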



An instantaneous market volatility estimation
Oleh Danyliv, Bruce Bland
arXiv

Working on different aspects of algorithmic trading, we empirically discovered a new market invariant. It links the volatility of an instrument to its traded volume, the average spread and the volume in the order book. The invariant has been tested on different markets and different asset classes, and in no case did we find a significant violation. The formula for the invariant was used for volatility estimation, which we call the instantaneous volatility. A quantitative comparison shows that it reproduces realised volatility better than a one-day-ahead GARCH(1,1) prediction. Because of its short-term predictive nature, the instantaneous volatility could be used by algo developers, volatility traders and other market professionals.
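
The abstract does not reproduce the invariant's formula, so the sketch below only illustrates the kind of benchmark comparison it describes: a one-day-ahead GARCH(1,1) volatility forecast (via the third-party arch package, assumed installed) scored against next-day realised volatility, with a clearly hypothetical placeholder where an invariant-based instantaneous estimator would plug in.

    import numpy as np
    from arch import arch_model   # third-party `arch` package, assumed installed

    def garch_one_day_ahead(daily_returns):
        """One-day-ahead GARCH(1,1) volatility forecast from daily returns (in percent)."""
        res = arch_model(daily_returns, vol="Garch", p=1, q=1).fit(disp="off")
        return float(np.sqrt(res.forecast(horizon=1).variance.iloc[-1, 0]))

    def realised_vol(intraday_returns):
        """Realised daily volatility from intraday returns (in percent)."""
        return float(np.sqrt(np.sum(np.square(intraday_returns))))

    def instantaneous_vol(spread, traded_volume, book_volume):
        """Hypothetical placeholder: the invariant links volatility to the spread, the
        traded volume and the order-book volume, but its formula is not given in the
        abstract, so nothing is implemented here."""
        raise NotImplementedError("plug the invariant-based estimator in here")

    # usage sketch: score each forecast against next-day realised volatility, e.g.
    # mse_garch = np.mean((garch_forecasts - realised_vols) ** 2)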



Critical Decisions for Asset Allocation via Penalized Quantile Regression
Giovanni Bonaccolto
arXiv

We extend the analysis of investment strategies derived from penalized quantile regression models, introducing alternative approaches to improve state-of-the-art asset allocation rules. First, we use a post-penalization procedure to deal with overshrinking and concentration issues. Second, we investigate whether, and to what extent, the performance changes when moving from convex to nonconvex penalty functions. Third, we compare different methods for selecting the optimal tuning parameter, which controls the intensity of the penalization. Empirical analyses on real-world data show that these alternative methods outperform the simple LASSO. This evidence becomes stronger when focusing on extreme risk, which is strictly linked to the quantile regression method.
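
For readers who want a concrete starting point, here is a minimal sketch of LASSO-penalized quantile regression with a cross-validated tuning parameter, using scikit-learn's QuantileRegressor (L1 penalty via its alpha argument). The allocation-rule construction, the post-penalization step and the nonconvex penalties studied in the paper are not reproduced here, and the variable names in the usage comment are hypothetical.

    import numpy as np
    from sklearn.linear_model import QuantileRegressor
    from sklearn.metrics import mean_pinball_loss
    from sklearn.model_selection import KFold

    def cv_penalized_quantile(X, y, tau=0.05, alphas=(1e-4, 1e-3, 1e-2, 1e-1), n_splits=5):
        """Pick the L1 penalty for a tau-quantile regression by cross-validated pinball loss."""
        best_alpha, best_loss = None, np.inf
        for alpha in alphas:
            losses = []
            for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
                model = QuantileRegressor(quantile=tau, alpha=alpha, solver="highs").fit(X[train], y[train])
                losses.append(mean_pinball_loss(y[test], model.predict(X[test]), alpha=tau))
            if np.mean(losses) < best_loss:
                best_alpha, best_loss = alpha, float(np.mean(losses))
        return QuantileRegressor(quantile=tau, alpha=best_alpha, solver="highs").fit(X, y), best_alpha

    # usage sketch: regress a benchmark return on candidate asset returns at a low quantile;
    # the sparsity pattern of the fitted coefficients then suggests which assets to hold.
    # model, alpha = cv_penalized_quantile(asset_returns, benchmark_returns)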



Forecast Encompassing Tests for the Expected Shortfall
Timo Dimitriadis, Julie Schnaitmann
arXiv

In this paper, we introduce new forecast encompassing tests for the risk measure Expected Shortfall (ES). Forecasting and forecast evaluation techniques for the ES are rapidly gaining attention through the recently introduced Basel III Accords, which stipulate the use of the ES as the primary market risk measure in international banking regulation. Encompassing tests generally rely on the existence of strictly consistent loss functions for the functionals under consideration, which do not exist for the ES alone. However, our encompassing tests are based on recently introduced loss functions and an associated regression framework which consider the ES jointly with the corresponding Value at Risk (VaR). This setup facilitates several testing specifications, allowing both joint tests for the ES and VaR and stand-alone tests for the ES. We present asymptotic theory for our encompassing tests and verify their finite sample properties through various simulation setups. In an empirical application, we use the encompassing tests to demonstrate the superiority of forecast combination methods for the ES for IBM stock.
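
The joint loss functions referred to here belong to the Fissler-Ziegel class, which scores a (VaR, ES) forecast pair together. As a rough, non-authoritative illustration only (using one common member of that class, with returns and risk forecasts reported as negative numbers for losses, and not reproducing the paper's encompassing test itself), one can at least compare the average joint loss of two ES forecast streams and of a simple combination:

    import numpy as np

    def fz0_loss(y, var, es, alpha=0.025):
        """One member of the Fissler-Ziegel joint loss family for (VaR, ES), valid
        when the VaR/ES forecasts are strictly negative (losses as negative returns)."""
        y, var, es = map(np.asarray, (y, var, es))
        hit = (y <= var).astype(float)
        return -hit * (var - y) / (alpha * es) + var / es + np.log(-es) - 1.0

    def avg_joint_loss(y, var, es, alpha=0.025):
        return float(np.mean(fz0_loss(y, var, es, alpha)))

    # usage sketch: forecasts A and B for the same days, plus a naive 50/50 combination
    # loss_a = avg_joint_loss(returns, var_a, es_a)
    # loss_b = avg_joint_loss(returns, var_b, es_b)
    # loss_c = avg_joint_loss(returns, 0.5 * (var_a + var_b), 0.5 * (es_a + es_b))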



Geometrically Convergent Simulation of the Extrema of Lévy Processes
Jorge González Cázares, Aleksandar Mijatović, Gerónimo Uribe Bravo
arXiv

We develop a novel approximate simulation algorithm for the joint law of the position, the running supremum and the time of the supremum of a general Lévy process at an arbitrary finite time. We identify the law of the error in simple terms. We prove that the error decays geometrically in $L^p$ (for any $p\geq 1$) as a function of the computational cost, in contrast with the polynomial decay for the approximations available in the literature. We establish a central limit theorem and construct non-asymptotic and asymptotic confidence intervals for the corresponding Monte Carlo estimator. We prove that the multilevel Monte Carlo estimator has optimal computational complexity (i.e. of order $\epsilon^{-2}$ if the mean squared error is at most $\epsilon^2$) for locally Lipschitz and barrier-type functionals of the triplet and develop an unbiased version of the estimator. We illustrate the performance of the algorithm with numerical examples.
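
Under my reading, algorithms of this type rest on a stick-breaking representation of the triplet (position, supremum, time of supremum); the toy sketch below applies a truncated version to Brownian motion with drift, where increments over arbitrary time lengths are easy to sample. It illustrates the flavour of such a scheme only, and is not the paper's general algorithm, its error analysis or its multilevel estimator.

    import numpy as np

    def sb_triplet_bm(mu, sigma, T, n_sticks, rng):
        """Truncated stick-breaking approximation of (X_T, sup X, argmax of sup) for
        X_t = mu*t + sigma*W_t.  The error from truncating at n_sticks is ignored here."""
        remaining, x, sup, tau = T, 0.0, 0.0, 0.0
        for _ in range(n_sticks):
            ell = remaining * rng.uniform()                                  # stick-breaking length
            remaining -= ell
            xi = mu * ell + sigma * np.sqrt(ell) * rng.standard_normal()     # increment over ell
            x += xi
            if xi > 0.0:
                sup += xi
                tau += ell
        # close the position with the increment over the leftover time (supremum stays truncated)
        x += mu * remaining + sigma * np.sqrt(remaining) * rng.standard_normal()
        return x, sup, tau

    rng = np.random.default_rng(0)
    samples = np.array([sb_triplet_bm(0.1, 0.2, 1.0, 20, rng) for _ in range(10000)])
    print(samples.mean(axis=0))   # crude Monte Carlo means of (X_T, sup, argmax)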



Random walk model from the point of view of algorithmic trading
Oleh Danyliv, Bruce Bland, Alexandre Argenson
arXiv

Despite the fact that the intraday market price distribution is not normal, the random walk model of price behaviour is as important for understanding the basic principles of the market as the pendulum model is as a starting point for many fundamental theories in physics. The model is a good zero-order approximation for liquid, fast-moving markets, where queue position is less important than price action. In this paper we present an exact solution for the cost of static passive slice execution. It is shown that if the price follows a random walk, there is no optimal limit level for order execution: all levels have the same execution cost as an immediate aggressive execution at the beginning of the slice. Additionally, estimates of the risk of a limit order, as well as of the probability of its execution, are derived as functions of the slice time and the standard deviation of the price.
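
For a driftless continuous random walk (Brownian) price, one standard closed-form ingredient of such calculations is the reflection-principle probability that the price touches a limit level placed a distance $d$ away within the slice time $T$, namely $2\Phi(-d/(\sigma\sqrt{T}))$. The sketch below is an illustration rather than the paper's derivation: it checks that formula against a discretised Monte Carlo (which slightly underestimates touches because it only observes the path on a grid).

    import numpy as np
    from scipy.stats import norm

    def touch_probability(d, sigma, T):
        """Reflection principle: probability that a driftless Brownian price with
        volatility sigma touches a level a distance d away (on one side) within T."""
        return 2.0 * norm.cdf(-d / (sigma * np.sqrt(T)))

    def touch_probability_mc(d, sigma, T, n_paths=5000, n_steps=1000, seed=0):
        """Discretised Monte Carlo check of the same probability."""
        rng = np.random.default_rng(seed)
        steps = sigma * np.sqrt(T / n_steps) * rng.standard_normal((n_paths, n_steps))
        return float(np.mean(np.cumsum(steps, axis=1).min(axis=1) <= -d))

    print(touch_probability(0.5, 1.0, 1.0), touch_probability_mc(0.5, 1.0, 1.0))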



Total positivity and the classification of term structure shapes in the two-factor Vasicek model
Martin Keller-Ressel
arXiv

Using methods from the theory of total positivity, we provide a full classification of attainable term structure shapes in the two-factor Vasicek model. In particular, we show that the shapes normal, inverse, humped, dipped and hump-dip are always attainable. In certain parameter regimes up to four additional shapes can be produced. Our results show that the correlation and the difference in mean-reversion speeds of the two factor processes play a key role in determining the scope of attainable shapes. The mathematical tools from total positivity can likely be applied to higher-dimensional generalizations of the Vasicek model and to other interest rate models as well.
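
To see how such shapes arise numerically, the sketch below computes yield curves in a two-factor Vasicek (Gaussian affine) model from the standard closed-form factor loadings $B_i(\tau)=(1-e^{-\kappa_i\tau})/\kappa_i$, integrating the remaining affine term numerically, and then classifies a curve by counting its interior turning points. The parameter values are arbitrary examples, and the counting step is a crude stand-in for the paper's total-positivity analysis, not a reproduction of it.

    import numpy as np

    def yield_curve(x, kappa, theta, sigma, rho, taus, n_grid=2000):
        """Zero-coupon yields in a two-factor Vasicek model r = x1 + x2,
        dx_i = kappa_i (theta_i - x_i) dt + sigma_i dW_i, corr(dW_1, dW_2) = rho."""
        x, kappa, theta, sigma = map(np.asarray, (x, kappa, theta, sigma))
        s = np.linspace(0.0, float(np.max(taus)), n_grid)            # integration grid
        B = (1.0 - np.exp(-np.outer(kappa, s))) / kappa[:, None]     # factor loadings B_i(s)
        integrand = (kappa * theta) @ B - 0.5 * (
            sigma[0] ** 2 * B[0] ** 2 + sigma[1] ** 2 * B[1] ** 2
            + 2.0 * rho * sigma[0] * sigma[1] * B[0] * B[1])
        cumA = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s))))
        A = np.interp(taus, s, cumA)                                 # A(tau) by trapezoidal integration
        Bt = (1.0 - np.exp(-np.outer(kappa, taus))) / kappa[:, None]
        return (A + x @ Bt) / np.asarray(taus)                       # y(tau) = (A + B1*x1 + B2*x2)/tau

    def shape(yields):
        """Crude classification: count interior turning points of the curve."""
        d = np.sign(np.diff(yields))
        turns = int(np.sum(d[1:] != d[:-1]))
        return {0: "monotone (normal or inverse)", 1: "humped or dipped"}.get(turns, f"{turns} turning points")

    taus = np.linspace(0.25, 30.0, 120)
    y = yield_curve(x=[0.01, 0.02], kappa=[0.2, 1.5], theta=[0.03, 0.01],
                    sigma=[0.01, 0.015], rho=-0.5, taus=taus)
    print(shape(y))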



Wasserstein Index Generation Model: Automatic Generation of Time-series Index with Application to Economic Policy Uncertainty
Fangzhou Xie
arXiv

I propose a novel method, the Wasserstein Index Generation model (WIG), to generate a public sentiment index automatically. It can be applied off-the-shelf and is especially good at detecting sudden sentiment spikes. To test the model's effectiveness, an application generating the Economic Policy Uncertainty (EPU) index is showcased.