# Research articles for 2020-07-19

arXiv

We introduce and study the main properties of a class of convex risk measures that refine Expected Shortfall by simultaneously controlling the expected losses associated with different portions of the tail distribution. The corresponding adjusted Expected Shortfalls quantify risk as the minimum amount of capital that has to be raised and injected into a financial position $X$ to ensure that Expected Shortfall $ES_p(X)$ does not exceed a pre-specified threshold $g(p)$ for every probability level $p\in[0,1]$. Through the choice of the benchmark risk profile $g$ one can tailor the risk assessment to the specific application of interest. We devote special attention to the study of risk profiles defined by the Expected Shortfall of a benchmark random loss, in which case our risk measures are intimately linked to second-order stochastic dominance.
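Since Expected Shortfall is cash-additive ($ES_p(X+m) = ES_p(X) - m$), the minimal capital described above can be written as $\sup_{p}\{ES_p(X) - g(p)\}$. Below is a minimal numpy sketch of that quantity on a simulated P&L sample; the benchmark profile $g$, the grid, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def expected_shortfall(x, p):
    """Empirical ES_p: average loss over the worst p-fraction of outcomes.
    x holds P&L outcomes (gains positive), so losses are -x."""
    losses = np.sort(-x)[::-1]              # largest losses first
    k = max(1, int(np.ceil(p * len(x))))
    return float(losses[:k].mean())

def adjusted_es(x, g, grid=None):
    """sup_p { ES_p(x) - g(p) }: the minimal capital injection making
    ES_p stay below the benchmark profile g(p) at every level p."""
    if grid is None:
        grid = np.linspace(0.01, 1.0, 100)
    return max(expected_shortfall(x, p) - g(p) for p in grid)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100_000)           # toy P&L sample
g = lambda p: 1.0 + 0.5 * (1.0 - p)         # illustrative benchmark profile
capital = adjusted_es(x, g)
```

With this profile the supremum is attained deep in the tail, where the empirical $ES_p$ exceeds the tolerated threshold by the most.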

arXiv

Genetic programming (GP) is the state of the art in automated feature construction for financial tasks. It represents features as reverse Polish expressions and then runs an evolutionary process. However, with the development of deep learning, more powerful feature extraction tools have become available. This paper proposes the Alpha Discovery Neural Network (ADNN), a tailored neural network structure that can automatically construct diversified financial technical indicators based on prior knowledge. We make three main contributions. First, we use domain knowledge from quantitative trading to design the sampling rules and the objective function. Second, pre-training and model pruning replace genetic programming, enabling a more efficient evolutionary process. Third, the feature extractors in ADNN are interchangeable and produce different functions. Experimental results show that ADNN constructs more informative and diversified features than GP, effectively enriching the current factor pool. Fully-connected and recurrent networks extract information from financial time series better than convolutional neural networks. In practice, features constructed by ADNN consistently improve multi-factor strategies' revenue, Sharpe ratio, and maximum drawdown compared with investment strategies without these factors.

SSRN

The purpose of this article is to present a novel and unconventional algorithm for bankruptcy risk management in banking technologies catered towards lending to legal entities (enterprises and companies). The challenges of assessing risk in this area primarily relate to the reduction of type I and type II errors when making decisions on the terms of lending (i.e. loan amounts and repayment parameters) on the ostensibly objective basis of a borrower's creditworthiness assessment. As such, it is necessary to use a unified procedure to select appropriate economic indicators for any bankruptcy model in order to reduce the high degree of uncertainty and noisiness of publicly available databases, and to take into consideration the specific character of knowledge-intensive, high-tech and "green" manufacturing. To approach this challenge, this article presents a mix of methods, including credit scoring, neural simulation, a fuzzy model description, fuzzy inference rules, and a fuzzy Pospelov scale. The research results are as follows: the authors have developed an unconventional algorithm for diagnosing corporate bankruptcy stages. This algorithm is based on the application of a system-wide law relating to decreases in integrated system entropy, contrasted with the sum of entropies of the relevant collated subsystems. It has been tested on a series of experimental observations of 30 agricultural enterprises in the Sterlitamak District of the Republic of Bashkortostan. We have thus assessed the financial condition of borrowing companies, while controlling for the probability of a wide range of indicators. Using this algorithm, the authors decided not to apply the rigorous requirements of the classical 'least squares' method used in regression analyses. A switch to a neural simulation approach in this algorithm necessitated an evaluation of the adequacy of the obtained model on the basis of a Bayesian approach.
On the basis of this research, the authors propose that a regularization of bankruptcy models has been achieved.

arXiv

Mathematically, the execution of an American-style financial derivative is commonly reduced to solving an optimal stopping problem. Breaking the general assumption that the knowledge of the holder is restricted to the price history of the underlying asset, we allow for the disclosure of future information about the terminal price of the asset by modeling it as a Brownian bridge. This model may be used under special market conditions; in particular, we focus on what the literature calls the "pinning effect", that is, when the price of the asset approaches the strike price of a highly-traded option close to its expiration date. Our main mathematical contribution is in characterizing the solution to the optimal stopping problem when the gain function includes the discount factor. We show how to numerically compute the solution, and we analyze the effect of volatility estimation on the strategy by computing confidence curves around the optimal stopping boundary. Finally, we compare our method with the optimal exercise time based on a geometric Brownian motion, using real data exhibiting pinning.
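The pinning model can be illustrated by simulating a Brownian bridge forced to hit the strike at expiry, using the exact conditional update $X_{t+\Delta t} = X_t + \frac{\Delta t}{T-t}(S_T - X_t) + \sigma\sqrt{\Delta t \, \frac{T-t-\Delta t}{T-t}}\, Z$. The sketch below is only illustrative; the function name and all parameter values are assumptions, not taken from the paper.

```python
import numpy as np

def brownian_bridge_paths(s0, s_T, T, n_steps, n_paths, sigma, rng):
    """Simulate paths of a Brownian bridge started at s0 and pinned at s_T
    at time T -- a toy model of the 'pinning effect'. Uses the exact
    conditional mean/variance of the bridge at each step."""
    dt = T / n_steps
    paths = np.full((n_paths, n_steps + 1), float(s0))
    for k in range(n_steps):
        rem = n_steps - k                       # remaining steps
        drift = (s_T - paths[:, k]) / rem       # = (s_T - X_t) * dt / (T - t)
        vol = sigma * np.sqrt(dt * (rem - 1) / rem)
        paths[:, k + 1] = paths[:, k] + drift + vol * rng.standard_normal(n_paths)
    return paths

rng = np.random.default_rng(1)
K = 100.0                                       # strike the price pins to
paths = brownian_bridge_paths(s0=98.0, s_T=K, T=1.0, n_steps=250,
                              n_paths=5000, sigma=10.0, rng=rng)
```

At the final step the conditional variance vanishes, so every path ends exactly at the strike, reproducing the pinning behaviour the paper exploits.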

arXiv

The time-varying kernel density estimation relies on two free parameters: the bandwidth and the discount factor. We propose to select these parameters so as to minimize a criterion consistent with the traditional requirements of the validation of a probability density forecast. These requirements are both the uniformity and the independence of the so-called probability integral transforms, which are the forecast time-varying cumulative distribution functions applied to the observations. We thus build a new numerical criterion incorporating both the uniformity and independence properties by means of an adapted Kolmogorov-Smirnov statistic. We apply this method to financial markets during the COVID-19 crisis. We determine the time-varying density of daily price returns of several stock indices and, using various divergence statistics, we are able to describe the chronology of the crisis as well as regional disparities. For instance, we observe a more limited impact of COVID-19 on financial markets in China, a strong impact in the US, and a slow recovery in Europe.
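The uniformity half of such a criterion can be sketched as follows: compute the probability integral transforms of a discounted-kernel density forecast, then measure their Kolmogorov-Smirnov distance to the uniform distribution. This is only an illustrative reconstruction; the Gaussian kernel, the function names, and the toy data are assumptions, and the paper's full criterion also incorporates independence of the PITs.

```python
import math
import numpy as np

def pit_series(x, h, omega):
    """PITs of a discounted Gaussian-kernel density forecast: at each t the
    forecast CDF mixes kernels centred on past points x_0..x_{t-1}, with
    weights decaying as omega**age; h is the bandwidth."""
    pits = []
    for t in range(1, len(x)):
        past = x[:t]
        w = omega ** np.arange(t - 1, -1, -1)       # newest point gets weight 1
        w = w / w.sum()
        z = (x[t] - past) / h
        cdf = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])
        pits.append(float(np.dot(w, cdf)))
    return np.array(pits)

def ks_uniform(pits):
    """Kolmogorov-Smirnov distance of the PIT sample from the uniform CDF."""
    u = np.sort(pits)
    n = len(u)
    grid = np.arange(1, n + 1) / n
    return max(np.max(grid - u), np.max(u - (grid - 1.0 / n)))

rng = np.random.default_rng(2)
returns = rng.normal(0.0, 1.0, 500)                 # toy i.i.d. "daily returns"
pits = pit_series(returns, h=0.4, omega=0.99)
stat = ks_uniform(pits)
```

Minimizing `stat` over a grid of `(h, omega)` pairs would then select the bandwidth and discount factor, in the spirit of the criterion described above.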

arXiv

We analyse the optimal exercise of an executive stock option (ESO) written on a stock whose drift parameter falls to a lower value at a change point, an exponentially distributed random time independent of the Brownian motion driving the stock. Two agents, who do not trade the stock, have differing information on the change point, and seek to optimally exercise the option by maximising its discounted payoff under the physical measure. The first agent has full information, and observes the change point. The second agent has partial information and filters the change point from price observations. This scenario is designed to mimic the positions of two employees of varying seniority, a fully informed executive and a partially informed less senior employee, each of whom receives an ESO. The partial information scenario yields a model under the observation filtration $\widehat{\mathbb{F}}$ in which the stock drift becomes a diffusion driven by the innovations process, an $\widehat{\mathbb{F}}$-Brownian motion also driving the stock under $\widehat{\mathbb{F}}$, and the partial information optimal stopping value function has two spatial dimensions. We rigorously characterise the free boundary PDEs for both agents, establish shape and regularity properties of the associated optimal exercise boundaries, and prove the smooth pasting property in both information scenarios, exploiting some stochastic flow ideas to do so in the partial information case. We develop finite difference algorithms to numerically solve both agents' exercise and valuation problems and illustrate that the additional information of the fully informed agent can result in exercise patterns which exploit the information on the change point, lending credence to empirical studies which suggest that privileged information of bad news is a factor leading to early exercise of ESOs prior to poor stock price performance.

arXiv

Green hydrogen can help to decarbonize transportation, but its power sector interactions are not well understood. It may contribute to integrating variable renewable energy sources if production is sufficiently flexible in time. Using an open-source co-optimization model of the power sector and four options for supplying hydrogen at German filling stations, we find a trade-off between energy efficiency and temporal flexibility: for lower shares of renewables and hydrogen, more energy-efficient and less flexible small-scale on-site electrolysis is optimal. For higher shares of renewables and/or hydrogen, more flexible but less energy-efficient large-scale hydrogen supply chains gain importance as they allow disentangling hydrogen production from demand via storage. Liquid hydrogen emerges as particularly beneficial, followed by liquid organic hydrogen carriers and gaseous hydrogen. Large-scale hydrogen supply chains can deliver substantial power sector benefits, mainly through reduced renewable surplus generation. Energy modelers and system planners should consider the distinct flexibility characteristics of hydrogen supply chains in more detail when assessing the role of green hydrogen in future energy transition scenarios.

arXiv

We study an SI-type model, with the possibility of vaccination, where the population is partitioned between pro-vaxxers and anti-vaxxers. We show that, during the outbreak of a disease, segregating people who are against vaccination from the rest of the population decreases the speed of recovery and may increase the number of cases. We then include endogenous choices based on the tradeoff between the cost of vaccinating and the risk of getting infected. We show that the results remain valid under endogenous choices, unless people are too flexible in determining their identity towards vaccination.
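The segregation mechanism can be illustrated with a toy two-group compartmental model in which a homophily parameter controls how many contacts stay within one's own group. Everything below (rates, group sizes, the mixing matrix) is an illustrative assumption, not the paper's specification.

```python
import numpy as np

def simulate(eps, beta=0.3, gamma=0.1, nu=0.05, T=400, dt=0.5):
    """Toy two-group epidemic with vaccination. Group 0 = pro-vaxxers
    (vaccinate at rate nu), group 1 = anti-vaxxers (never vaccinate).
    eps in [0, 1] is the fraction of contacts kept within one's own group
    (eps = 1 means full segregation). Returns cumulative cases."""
    S = np.array([0.49, 0.49])          # susceptible mass per group
    I = np.array([0.01, 0.01])          # infected mass per group
    mix = np.array([[eps + (1 - eps) * 0.5, (1 - eps) * 0.5],
                    [(1 - eps) * 0.5, eps + (1 - eps) * 0.5]])
    cum = I.sum()
    for _ in range(int(T / dt)):
        prev = I / 0.5                  # per-group prevalence (groups of mass 0.5)
        new_inf = beta * (mix @ prev) * S * dt
        S = S - new_inf - np.array([nu, 0.0]) * S * dt
        I = I + new_inf - gamma * I * dt
        cum += new_inf.sum()
    return float(cum)

mixed = simulate(eps=0.0)               # uniform mixing
segregated = simulate(eps=1.0)          # anti-vaxxers fully separated
```

Comparing `mixed` and `segregated` across parameter choices shows how isolating the unvaccinated group can leave the epidemic burning through a pool of susceptibles that vaccination never reaches.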

arXiv

The objective of this work is twofold: to expand the depression models proposed by Tobin and analyse a supply shock, such as the Covid-19 pandemic, in this Keynesian conceptual environment. The expansion allows us to propose the evolution of all endogenous macroeconomic variables. The result obtained is relevant due to its theoretical and practical implications. A quantity or Keynesian adjustment to the shock produces a depression through the effect on aggregate demand. This depression worsens in the medium/long-term. It is accompanied by increases in inflation, inflation expectations and the real interest rate. A stimulus tax policy is also recommended, as well as an active monetary policy to reduce real interest rates. On the other hand, the pricing or Marshallian adjustment foresees a more severe and rapid depression in the short-term. There would be a reduction in inflation and inflation expectations, and an increase in the real interest rates. The tax or monetary stimulus measures would only impact inflation. This result makes it possible to clarify and assess the resulting depression, as well as propose policies. Finally, it offers conflicting predictions that allow one of the two models to be falsified.

arXiv

We consider a nonlinear random walk which, in each time step, is free to choose its own transition probability within a neighborhood (w.r.t. the Wasserstein distance) of the transition probability of a fixed Lévy process. In analogy to the classical framework, we show that, when passing from discrete to continuous time via a scaling limit, this nonlinear random walk gives rise to a nonlinear semigroup. We explicitly compute the generator of this semigroup and the corresponding PDE as a perturbation of the generator of the initial Lévy process.

SSRN

I show that under-reaction is a robust response to model mis-specification rewarded by financial markets, rather than an "irrational" attitude that leads to extinction. Under-reacting prediction schemes guarantee predictions as accurate as Bayes' in well-specified learning problems and beat Bayes' in many mis-specified learning environments. Therefore, if a Bayesian agent and an under-reacting agent with the same information trade in the same market, there are no paths on which the under-reacting agent loses all his wealth against the Bayesian, while there is a large class of mis-specified learning settings in which the Bayesian agent loses all his wealth against the under-reacting agent almost surely.

arXiv

This paper considers the pricing of equity-linked life insurance contracts with death and survival benefits in a general model with multiple stochastic risk factors: interest rate, equity, volatility, unsystematic and systematic mortality. We price the equity-linked contracts by assuming that the insurer hedges the risks to reduce the local variance of the net asset value process and requires a compensation for the non-hedgeable part of the liability in the form of an instantaneous standard deviation risk margin. The price can then be expressed as the solution of a system of non-linear partial differential equations. We reformulate the problem as a backward stochastic differential equation with jumps and solve it numerically by the use of efficient neural networks. Sensitivity analysis is performed with respect to initial parameters and an analysis of the accuracy of the approximation of the true price with our neural networks is provided.

arXiv

In most OTC markets, a small number of market makers provide liquidity to other market participants. More precisely, for a list of assets, they set prices at which they agree to buy and sell. Market makers therefore face an interesting optimization problem: they need to choose bid and ask prices that make money while mitigating the risk associated with holding inventory in a volatile market. Many market making models have been proposed in the academic literature, most of them dealing with single-asset market making, whereas market makers are usually in charge of a long list of assets. The rare models tackling multi-asset market making, however, suffer from the curse of dimensionality when it comes to the numerical approximation of the optimal quotes. The goal of this paper is to propose a dimensionality reduction technique to address multi-asset market making by using a factor model. Moreover, we generalize existing market making models by adding an important feature: the existence of different transaction sizes and the possibility for market makers in OTC markets to quote different prices for requests of different sizes.
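The dimensionality-reduction idea can be sketched generically: project asset returns onto a small number of statistical factors, so that inventory risk is tracked in factor space rather than asset by asset. The PCA construction below is an illustrative stand-in; the paper's factor model may be specified differently, and all names and parameters are assumptions.

```python
import numpy as np

def factor_decomposition(returns, n_factors):
    """PCA-style factor model of asset returns: returns ~ F @ B + residual,
    with B the (n_factors x n_assets) loadings matrix. Reduces an n_assets-
    dimensional risk description to n_factors dimensions."""
    X = returns - returns.mean(axis=0)
    cov = X.T @ X / (len(X) - 1)            # sample covariance
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_factors]
    B = vecs[:, order].T                    # factor loadings
    F = X @ B.T                             # factor returns
    return F, B

rng = np.random.default_rng(3)
n_assets, n_obs, k = 30, 1000, 3
true_F = rng.normal(size=(n_obs, k))        # toy data with a 3-factor structure
load = rng.normal(size=(k, n_assets))
returns = true_F @ load + 0.1 * rng.normal(size=(n_obs, n_assets))
F, B = factor_decomposition(returns, n_factors=k)
explained = np.var(F @ B, axis=0).sum() / np.var(returns, axis=0).sum()
```

When a few factors capture most of the covariance, the market maker's control problem can be posed on the low-dimensional factor inventory instead of the full asset inventory, which is the dimensionality reduction the abstract refers to.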