Research articles for 2019-05-26

How big should a Stress Shock be?
David G Maher

Stress shocks are often calculated as multiples of the standard deviation of a history set. This paper investigates how many standard deviations are required to guarantee that such a shock exceeds every observation in the history set, given the additional constraint of a known kurtosis. The results of this analysis are then used to validate the shocks produced by some stress test models, in particular that of Brace-Lauer-Rado. A secondary application of our results is to three known extensions of Chebyshev's Inequality for the case where the kurtosis is known; we find that our results give a tighter bound than these well-known inequalities.
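The two sample quantities the abstract relates can be illustrated numerically. The sketch below (ours, not the paper's; function names are illustrative) measures how many sample standard deviations are needed to cover the most extreme observation of a history set, alongside the set's sample kurtosis, for a Gaussian and a heavier-tailed history of the same length:

```python
import numpy as np

rng = np.random.default_rng(0)

def max_deviation_in_sd(x):
    """How many sample standard deviations the most extreme
    observation lies from the sample mean."""
    x = np.asarray(x, dtype=float)
    return np.max(np.abs(x - x.mean())) / x.std()

def sample_kurtosis(x):
    """Plain (non-excess) sample kurtosis: mean of z^4 for
    standardized observations z; equals 3 for a Gaussian."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.mean(z**4)

# A heavy-tailed history set (Student-t, df=4) typically needs more
# standard deviations to cover its extremes than a Gaussian one.
normal_hist = rng.standard_normal(1000)
heavy_hist = rng.standard_t(df=4, size=1000)

for name, hist in [("normal", normal_hist), ("student-t(4)", heavy_hist)]:
    print(f"{name:>12}: kurtosis = {sample_kurtosis(hist):5.2f}, "
          f"max |deviation| = {max_deviation_in_sd(hist):4.2f} SDs")
```

By Samuelson's inequality, the maximum standardized deviation of any sample of size n is at most (n-1)/sqrt(n), so the shock multiple the paper studies always has a finite worst case; the kurtosis constraint tightens it further.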

Learning Choice Functions: Concepts and Architectures
Karlson Pfannschmidt, Pritha Gupta, Eyke Hüllermeier

We study the problem of learning choice functions, which play an important role in various domains of application, most notably in the field of economics. Formally, a choice function is a mapping from sets to sets: Given a set of choice alternatives as input, a choice function identifies a subset of most preferred elements. Learning choice functions from suitable training data comes with a number of challenges. For example, the sets provided as input and the subsets produced as output can be of any size. Moreover, since the order in which alternatives are presented is irrelevant, a choice function should be symmetric. Perhaps most importantly, choice functions are naturally context-dependent, in the sense that the preference in favor of an alternative may depend on what other options are available. We formalize the problem of learning choice functions and present two general approaches based on two representations of context-dependent utility functions. Both approaches are instantiated by means of appropriate neural network architectures, and their performance is demonstrated on suitable benchmark tasks.
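The two properties the abstract names, symmetry and context dependence, can be sketched together in a few lines. The toy scorer below (ours, not one of the paper's architectures; the weights and the mean-pooling summary are illustrative assumptions) scores each alternative by its own features plus a permutation-invariant summary of the whole set, and chooses the alternatives scoring above a threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

def choose(items, w_item, w_context, threshold=0.0):
    """Illustrative context-dependent choice function: each
    alternative is scored from its own features plus a symmetric
    (mean-pooled) summary of the whole set, and alternatives with
    score above `threshold` are chosen. Returns chosen indices."""
    items = np.asarray(items, dtype=float)           # shape (n, d)
    context = items.mean(axis=0)                     # permutation-invariant
    scores = items @ w_item + context @ w_context    # shape (n,)
    return set(np.flatnonzero(scores > threshold))

# Toy example: five alternatives with 2-D features, hand-picked weights.
w_item = np.array([1.0, -0.5])
w_context = np.array([-1.0, 0.0])
items = rng.standard_normal((5, 2))

chosen = choose(items, w_item, w_context)
# Symmetry check: shuffling the presentation order leaves the chosen
# alternatives (as feature vectors) unchanged.
perm = rng.permutation(5)
chosen_perm = choose(items[perm], w_item, w_context)
assert ({tuple(items[i]) for i in chosen}
        == {tuple(items[perm][i]) for i in chosen_perm})
```

Because the context term depends on the set only through its mean, adding or removing an option shifts every score, which is exactly the context dependence the abstract describes; a learned architecture would replace the linear scores with neural networks.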

Machine Learning Tree and Exact Integration for Pricing American Options in High Dimension
Ludovic Goudenège, Andrea Molent, Antonino Zanette

In this paper we modify the Gaussian Process Regression Monte Carlo (GPR-MC) method introduced by Goudenège et al., proposing two efficient techniques for computing the price of American basket options. In particular, we consider baskets of assets that follow Black-Scholes dynamics. The proposed techniques, called GPR Tree (GPR-Tree) and GPR Exact Integration (GPR-EI), are both based on machine learning, combined with binomial trees or with a closed integration formula. Both methods solve the backward dynamic programming problem for a Bermudan approximation of the American option. On each exercise date, the value of the option is first computed as the maximum of the exercise value and the continuation value, and then approximated by means of Gaussian Process Regression. Both methods derive from GPR-MC and differ mainly in how the continuation value is approximated: by a single step of a binomial tree, or by integration against the probability density of the process. Numerical results show that the two methods are accurate and reliable, and that they improve on GPR-MC in handling American options on very large baskets of assets.
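The backward-induction backbone shared by these methods can be sketched in one dimension. The toy below (ours, not the paper's high-dimensional algorithm; all parameter values are illustrative) prices a Bermudan put under Black-Scholes dynamics, using scikit-learn's Gaussian Process Regression to approximate the continuation value at each exercise date, regressed on simulated paths in Longstaff-Schwartz style; the paper's variants instead compute the continuation value with a one-step binomial tree (GPR-Tree) or an exact Gaussian integral (GPR-EI):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Model and contract parameters (illustrative values).
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 10, 200
dt = T / n_steps

# Simulate geometric Brownian motion paths on the exercise grid.
z = rng.standard_normal((n_paths, n_steps))
increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.concatenate([np.zeros((n_paths, 1)),
                                np.cumsum(increments, axis=1)], axis=1))

payoff = lambda s: np.maximum(K - s, 0.0)  # Bermudan put
value = payoff(S[:, -1])                   # option value at maturity

# Backward dynamic programming: at each exercise date, a GPR fit
# approximates the continuation value, and the option value is the
# maximum of exercising now and continuing.
for t in range(n_steps - 1, 0, -1):
    value = np.exp(-r * dt) * value        # discount one step back
    x = S[:, t].reshape(-1, 1)
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                  normalize_y=True).fit(x, value)
    continuation = gp.predict(x)
    exercise = payoff(S[:, t])
    value = np.where(exercise > continuation, exercise, value)

price = np.exp(-r * dt) * value.mean()
print(f"Bermudan put price (toy 1-D estimate): {price:.2f}")
```

In high dimension the regressors become vectors of basket asset prices, which is where the GPR's ability to handle many input dimensions, and the choice of how the continuation value is computed, become the deciding factors.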