# Research articles for 2020-12-20

arXiv

We provide an axiomatic approach to general premium principles in a probability-free setting that allows for Knightian uncertainty. Every premium principle is the sum of a risk measure, as a generalization of the expected value, and a deviation measure, as a generalization of the variance. One can uniquely identify a maximal risk measure and a minimal deviation measure in such decompositions. We show how previous axiomatizations of premium principles can be embedded into our more general framework. We discuss dual representations of convex premium principles, and study the consistency of premium principles with a financial market in which insurance contracts are traded.
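As a concrete, standard instance of such a decomposition (our illustration, not taken from the abstract itself), the classical variance premium principle already splits into an expected-value part and a variance part:

```latex
% Variance premium principle: risk measure plus deviation measure
H(X) \;=\; \underbrace{\mathbb{E}[X]}_{\text{risk measure } \rho(X)}
\;+\; \underbrace{\theta\,\mathrm{Var}(X)}_{\text{deviation measure } D(X)},
\qquad \theta > 0,
```

where the expected value generalizes to a risk measure and the (scaled) variance generalizes to a deviation measure, in the sense of the abstract.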

arXiv

In this research work, an explicit Runge-Kutta-Fehlberg time integration with a fourth-order compact finite difference scheme in space is employed for solving the regime-switching pricing model. First, we recast the free boundary problem as a system of nonlinear partial differential equations defined on a fixed domain for each regime. We further introduce a transformation based on the square root function, with a fixed free boundary, from which a high-order analytical approximation, achieved by extrapolation, is obtained for computing the derivative of the optimal exercise boundary in each regime. This enables us to employ a fourth-order spatial discretization and an adaptive time integration with Dirichlet boundary conditions for obtaining the numerical solution of the asset option, the option Greeks, and the optimal exercise boundary for each regime. In the set of equations, Hermite interpolation with a Newton basis is used to estimate the coupled asset options and option Greeks. A numerical experiment is carried out with two- and four-regime examples, and the results are compared with those of existing methods. The results show that the present method performs better in terms of computational speed and provides more accurate solutions even with large step sizes.
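The adaptive time integrator named above can be illustrated in isolation. The following is a minimal scalar sketch of the Runge-Kutta-Fehlberg 4(5) method with step-size control (the standard Fehlberg coefficients, not the paper's full scheme, which couples it with compact spatial differences):

```python
import math

def rkf45(f, t, y, t_end, h=0.1, tol=1e-8):
    """Integrate y' = f(t, y) from t to t_end with embedded 4(5) error control."""
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h/4, y + h*k1/4)
        k3 = f(t + 3*h/8, y + h*(3*k1 + 9*k2)/32)
        k4 = f(t + 12*h/13, y + h*(1932*k1 - 7200*k2 + 7296*k3)/2197)
        k5 = f(t + h, y + h*(439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104))
        k6 = f(t + h/2, y + h*(-8*k1/27 + 2*k2 - 3544*k3/2565
                               + 1859*k4/4104 - 11*k5/40))
        # embedded 4th- and 5th-order solutions share the same stages
        y4 = y + h*(25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5)
        y5 = y + h*(16*k1/135 + 6656*k3/12825 + 28561*k4/56430
                    - 9*k5/50 + 2*k6/55)
        err = abs(y5 - y4)
        if err <= tol or h < 1e-12:      # accept the step
            t, y = t + h, y5
        # adjust the step size from the embedded error estimate
        h *= min(4.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
    return y

# Example: y' = -y, y(0) = 1, whose exact solution at t = 1 is exp(-1).
approx = rkf45(lambda t, y: -y, 0.0, 1.0, 1.0)
```

The embedded pair gives a local error estimate for free, which is what allows the large step sizes the abstract reports.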

arXiv

We examine how Green governments influence macroeconomic, education, and environmental outcomes. Our empirical strategy exploits the fact that the Fukushima nuclear disaster in Japan gave rise to an unanticipated change of government in the German state of Baden-Wuerttemberg in 2011: the incumbent right-wing government was replaced by a left-wing government led by the Green party. We use the synthetic control method to select control states against which Baden-Wuerttemberg's outcomes can be compared. The results do not suggest that the Green government influenced macroeconomic outcomes. The Green government implemented education policies that caused comprehensive schools to become larger. We find no evidence that the Green government influenced CO2 emissions or particulate matter emissions, or that it increased overall energy usage from renewables. An intriguing result is that the share of wind power usage decreased relative to the estimated counterfactual. Intra-ecological conflicts and the realities of public office are likely to have prevented the Green government from implementing drastic policy changes.
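The synthetic control method mentioned above chooses non-negative donor weights, summing to one, so that the weighted donor outcomes track the treated unit's pre-treatment path. A toy two-donor sketch (our illustration with made-up data, not the authors' implementation):

```python
def synthetic_control_weights(treated_pre, donors_pre, grid=1000):
    """Two-donor toy version: search w in [0, 1] for donor weights (w, 1 - w)
    minimizing the squared pre-treatment tracking error."""
    donor_a, donor_b = donors_pre
    best_w, best_loss = 0.0, float("inf")
    for i in range(grid + 1):
        w = i / grid
        loss = sum((t - (w * a + (1 - w) * b)) ** 2
                   for t, a, b in zip(treated_pre, donor_a, donor_b))
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w

# Toy pre-treatment outcomes: treated unit is 30% donor A + 70% donor B.
donor_a = [1.0, 2.0, 3.0, 4.0]
donor_b = [2.0, 1.0, 4.0, 3.0]
treated = [0.3 * a + 0.7 * b for a, b in zip(donor_a, donor_b)]
w = synthetic_control_weights(treated, (donor_a, donor_b))
```

The estimated counterfactual after treatment is then the same weighted combination of the donors' post-treatment outcomes; real applications use many donors and a constrained optimizer rather than a grid.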

arXiv

We study the tails of closing auction return distributions for a sample of liquid European stocks. We use the stochastic call auction model of Derksen et al. (2020a) to derive a relation between the tail exponents of limit order placement distributions and the tail exponents of the resulting closing auction return distribution, and we verify this relation empirically. Counter-intuitively, large closing price fluctuations are typically not caused by large market orders; instead, tails become heavier when market orders are removed. The model explains this through the observation that limit orders are submitted so as to counter the existing market order imbalance.
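For readers unfamiliar with tail exponents: a standard empirical way to measure one is the Hill estimator over the largest order statistics. A minimal sketch (illustrative only; the paper's relation between order-placement and return tails is derived analytically from the auction model):

```python
import math
import random

def hill_estimator(data, k):
    """Hill estimate of the tail index alpha from the k largest observations
    of a positive heavy-tailed sample."""
    x = sorted(data, reverse=True)[: k + 1]      # k+1 largest order statistics
    logs = [math.log(v / x[k]) for v in x[:k]]   # log-spacings above threshold
    return k / sum(logs)                         # alpha_hat = 1 / mean(logs)

# Sanity check on synthetic Pareto data with known tail index alpha = 3.
random.seed(0)
sample = [random.paretovariate(3.0) for _ in range(5000)]
alpha_hat = hill_estimator(sample, k=500)
```

A heavier tail corresponds to a smaller alpha, so the paper's finding says that removing market orders lowers the estimated exponent of closing returns.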

arXiv

We show that filling an order with a large number of distinct counterparts incurs additional market impact, as opposed to filling the order with a small number of counterparts. For best execution, therefore, it may be beneficial to opportunistically fill orders with as few counterparts as possible in large-in-scale (LIS) venues.

This article introduces the concept of concentrated trading, a situation that occurs when a large fraction of the buying or selling in a given time period is done by one or a few traders, for example when executing a large order. Using London Stock Exchange data, we show that concentrated trading suffers price impact in addition to the impact caused by (smart) order routing. However, when matched with similarly concentrated counterparts on the other side of the market, the impact is greatly reduced. This suggests that exposing an order on LIS venues can be expected to improve execution performance.
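One simple way to quantify how concentrated trading is in a window is a Herfindahl-style index over traders' volume shares (our illustration; the paper's own concentration measure may differ):

```python
def concentration(volumes):
    """Herfindahl index of per-trader volumes in a window:
    equals 1/n when n traders trade equally, and 1 when one trader does all."""
    total = sum(volumes)
    return sum((v / total) ** 2 for v in volumes)

even = concentration([100, 100, 100, 100])   # four equal traders -> 0.25
skewed = concentration([970, 10, 10, 10])    # one dominant trader -> near 1
```

In the paper's terms, a window with a high index on one side of the market is "concentrated"; the empirical finding is that impact is reduced when both sides are concentrated simultaneously.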

arXiv

Inspired by a series of remarkable papers in recent years that use Deep Neural Nets to substantially speed up the calibration of pricing models, we investigate the use of Chebyshev Tensors instead of Deep Neural Nets. Given that Chebyshev Tensors can, under certain circumstances, be more efficient than Deep Neural Nets at exploring the input space of the function to be approximated, due to their exponential convergence, the calibration of pricing models seems, a priori, a good case in which Chebyshev Tensors can excel.

In this piece of research, we built Chebyshev Tensors, either directly or with the help of the Tensor Extension Algorithms, to tackle the computational bottleneck associated with the calibration of the rough Bergomi volatility model. The results are encouraging: the accuracy of model calibration via Chebyshev Tensors is similar to that obtained with Deep Neural Nets, but the building effort was between 5 and 100 times lower in the experiments run. Our tests indicate that, when using Chebyshev Tensors, the calibration of the rough Bergomi volatility model is around 40,000 times more efficient than calibration via brute force (using the pricing function directly).
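The core idea can be shown in one dimension: sample an expensive function once at Chebyshev points, then evaluate the cheap polynomial proxy inside the calibration loop. A hedged sketch (the paper builds multi-dimensional tensors for the rough Bergomi pricing map; `expensive_price` below is our stand-in, not a pricing model):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def expensive_price(x):
    """Placeholder for an expensive pricing call (smooth on [-1, 1])."""
    return np.exp(x) * np.sin(5.0 * x)

deg = 20
# Chebyshev points of the first kind on [-1, 1]
nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))
coeffs = C.chebfit(nodes, expensive_price(nodes), deg)   # one-off build cost
surrogate = lambda x: C.chebval(x, coeffs)               # cheap evaluations

grid = np.linspace(-1.0, 1.0, 201)
max_err = float(np.max(np.abs(surrogate(grid) - expensive_price(grid))))
```

For smooth functions the error decays exponentially in the degree, which is the "exponential convergence" the abstract appeals to; the tensor-extension machinery exists to keep the number of sample points manageable in higher dimensions.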

arXiv

Investors try to predict the returns of financial assets to make successful investments. Many quantitative analysts have used machine learning-based methods to find unknown, profitable market rules from large amounts of market data. However, several challenges in financial markets hinder practical applications of machine learning-based models. First, in financial markets, no single model can consistently make accurate predictions, because traders in markets quickly adapt to newly available information; instead, there are a number of ephemeral and partially correct models called "alpha factors". Second, since financial markets are highly uncertain, ensuring the interpretability of prediction models is quite important for building reliable trading strategies. To overcome these challenges, we propose the Trader-Company method, a novel evolutionary model that mimics the roles of a financial institution and the traders belonging to it. Our method predicts future stock returns by aggregating suggestions from multiple weak learners called Traders. A Trader holds a collection of simple mathematical formulae, each of which represents a candidate alpha factor and would be interpretable for real-world investors. The aggregation algorithm, called a Company, maintains multiple Traders. By randomly generating new Traders and retraining them, Companies can efficiently find financially meaningful formulae whilst avoiding overfitting to a transient state of the market. We show the effectiveness of our method by conducting experiments on real market data.
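A toy sketch of the Trader-Company structure (our illustration, not the authors' algorithm): each "Trader" is one small, human-readable formula, and the "Company" generates many random Traders, ranks them by in-sample error, and averages the survivors' predictions.

```python
import random

def make_trader(rng):
    """A Trader: the interpretable formula r_hat[t] = a*r[t-1] + b*r[t-2]."""
    return (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0))

def trader_sse(trader, returns):
    """In-sample squared error of one Trader's one-step-ahead predictions."""
    a, b = trader
    return sum((a * returns[t - 1] + b * returns[t - 2] - returns[t]) ** 2
               for t in range(2, len(returns)))

def fit_company(returns, n_traders=200, keep=10, seed=0):
    """Company: spawn random Traders, retain only the best-performing ones."""
    rng = random.Random(seed)
    traders = [make_trader(rng) for _ in range(n_traders)]
    return sorted(traders, key=lambda tr: trader_sse(tr, returns))[:keep]

def predict(company, returns):
    """Aggregate prediction: average of the surviving Traders' formulae."""
    return sum(a * returns[-1] + b * returns[-2]
               for a, b in company) / len(company)

# Noiseless toy series following r[t] = 0.5 r[t-1] + 0.3 r[t-2].
returns = [1.0, 1.0]
for _ in range(2, 40):
    returns.append(0.5 * returns[-1] + 0.3 * returns[-2])
company = fit_company(returns)
next_return = predict(company, returns)
```

The actual method adds richer formula grammars, retraining, and pruning of underperforming Traders; the point of the sketch is that each surviving formula remains readable, unlike a monolithic neural predictor.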