# Research articles for 2020-05-24

arXiv

We present a general framework for portfolio risk management in discrete time, based on a replicating martingale. This martingale is learned from a finite sample in a supervised setting. The model learns the features necessary for an effective low-dimensional representation, overcoming the curse of dimensionality common to function approximation in high-dimensional spaces. We show results based on polynomial and neural network bases. Both offer superior results to naive Monte Carlo methods and to existing methods such as least-squares Monte Carlo and replicating portfolios.
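The abstract does not spell out the regression step, but the supervised fit of a conditional-expectation surrogate on a polynomial basis can be sketched in NumPy as follows (the simulated data, the single risk factor, and the degree-3 basis are illustrative assumptions, not the paper's specification):

```python
import numpy as np

# Toy supervised sample: a risk factor x at time t and a noisy terminal
# payoff whose conditional expectation given x we want to approximate.
rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)
payoff = np.maximum(x + 0.5 * rng.standard_normal(10_000), 0.0)

# Polynomial basis up to degree 3: columns [1, x, x^2, x^3],
# fitted to the payoff by ordinary least squares.
basis = np.vander(x, N=4, increasing=True)
coef, *_ = np.linalg.lstsq(basis, payoff, rcond=None)

# Fitted values: a low-dimensional surrogate for the conditional payoff.
approx = basis @ coef
```

The same fit with a neural-network basis would replace `basis` by learned features; the least-squares step is what the comparison methods (LSMC, replicating portfolios) share.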

arXiv

In this paper we show how to approximate the transition density of a CARMA(p, q) model driven by a time-changed Brownian motion, using Gauss-Laguerre quadrature. We then provide an analytical formula for option prices when the log price follows a CARMA(p, q) model. We also propose an estimation procedure based on the approximated likelihood.
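The paper's density construction is not reproduced here, but Gauss-Laguerre quadrature itself, which approximates integrals of the form ∫ from 0 to ∞ of e^(-x) f(x) dx by a weighted sum over fixed nodes, is available directly in NumPy; a minimal sketch:

```python
import numpy as np

# Gauss-Laguerre nodes and weights: with n nodes the rule
#   sum_i w_i * f(x_i)  ~=  integral_0^inf e^{-x} f(x) dx
# is exact for polynomials f up to degree 2n - 1.
nodes, weights = np.polynomial.laguerre.laggauss(20)

def gauss_laguerre(f):
    """Approximate the integral of e^{-x} f(x) over [0, inf)."""
    return float(np.sum(weights * f(nodes)))

# Sanity check: integral of e^{-x} x^2 over [0, inf) is Gamma(3) = 2.
approx = gauss_laguerre(lambda x: x**2)
```

In the paper this kind of rule would evaluate the integral defining the transition density; the choice of 20 nodes above is arbitrary.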

arXiv

By applying network analysis techniques to a large input-output system, we identify key sectors in the local/regional economy. We overcome the limitations of traditional centrality measures by using random-walk-based measures that extend Blöchl et al. (2011). These are better suited to analyzing very dense networks, i.e. those in which most nodes are connected to all other nodes. They also allow for recursive ties (loops), which are common in economic systems (depending on the level of aggregation, most firms buy from and sell to other firms in the same industrial sector). The centrality measures we present are well suited for capturing sectoral effects missed by the usual output and employment multipliers. We also develop an R package (xtranat) for processing data from IMPLAN(R) models and for computing the newly developed measures.
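The paper's exact measures extend Blöchl et al., but the underlying idea, ranking sectors by how often a random walk over the flow network visits them, can be illustrated with a stationary-distribution computation; the 3-sector flow matrix below is a made-up toy example, and note that the diagonal self-loops pose no problem for the walk:

```python
import numpy as np

# Toy input-output flows: F[i, j] = value sector i sells to sector j.
# Diagonal entries are intra-sector sales, kept as self-loops.
sectors = ["agri", "manu", "serv"]
F = np.array([[5.0, 10.0,  2.0],
              [3.0,  8.0, 12.0],
              [1.0,  6.0,  9.0]])

# Row-normalize into the transition matrix of a random walk that moves
# from a selling sector to a buying sector in proportion to flow value.
P = F / F.sum(axis=1, keepdims=True)

# Power iteration for the stationary distribution pi (pi = pi @ P):
# sectors the walk visits often are "central" in the flow network.
pi = np.full(len(sectors), 1.0 / len(sectors))
for _ in range(200):
    pi = pi @ P

centrality = dict(zip(sectors, pi))
```

The paper's measures add refinements (e.g. counting-based variants), but this stationary view is the common core of random-walk centralities.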

arXiv

We analyze how investor expectations about economic growth and stock returns changed during the February-March 2020 stock market crash induced by the COVID-19 pandemic, as well as during the subsequent partial stock market recovery. We surveyed retail investors who are clients of Vanguard at three points in time: (i) on February 11-12, around the all-time stock market high, (ii) on March 11-12, after the stock market had collapsed by over 20%, and (iii) on April 16-17, after the market had rallied 25% from its lowest point. Following the crash, the average investor turned more pessimistic about the short-run performance of both the stock market and the real economy. Investors also perceived higher probabilities of both further extreme stock market declines and large declines in short-run real economic activity. In contrast, investor expectations about long-run (10-year) economic and stock market outcomes remained largely unchanged, and, if anything, improved. Disagreement among investors about economic and stock market outcomes also increased substantially following the stock market crash, with the disagreement persisting through the partial market recovery. Those respondents who were the most optimistic in February saw the largest decline in expectations, and sold the most equity. Those respondents who were the most pessimistic in February largely left their portfolios unchanged during and after the crash.

arXiv

This paper is an attempt to study the valuation of insurance contracts from first principles. We start from the observation that insurance contracts are inherently linked to financial markets, be it via interest rates or -- as in hybrid products, equity-linked life insurance and variable annuities -- directly to stocks or indices. By defining portfolio strategies on an insurance portfolio and combining them with financial trading strategies, we arrive at the notion of insurance-finance arbitrage (IFA). A fundamental theorem provides two sufficient conditions, one for the presence and one for the absence of IFA. The first relies on the conditional law of large numbers and risk-neutral valuation. As a key result we obtain a simple valuation rule, called the QP-rule, which is market consistent and excludes IFA.

Utilizing the theory of enlargements of filtrations we construct a tractable framework for general valuation results, working under weak assumptions.

The generality of the approach allows us to incorporate many important aspects, such as mortality risk or the dependence between mortality and stock markets, which is of utmost importance in the recent coronavirus crisis. For practical applications, we provide an affine formulation that leads to explicit valuation formulas for a large class of hybrid products.

arXiv

This paper presents a novel and direct approach to pricing boundary and final-value problems, corresponding to barrier options, using forward deep learning to solve forward-backward stochastic differential equations (FBSDEs). Barrier instruments expire or transform into another instrument if a barrier condition is satisfied before maturity; otherwise they perform like the instrument without the barrier condition. In the PDE formulation, this corresponds to adding boundary conditions to the final-value problem. The deep BSDE methods developed so far have not addressed barrier/boundary conditions directly. We extend the forward deep BSDE method to the barrier case by adding nodes to the computational graph that explicitly monitor the barrier conditions for each realization of the dynamics, as well as nodes that record the time, state variables, and trading strategy value at barrier breach, or at maturity otherwise. Given these additional nodes, the forward loss function quantifies the replication of the barrier or final payoff according to a chosen risk measure, such as the sum of squared differences. The proposed method can handle any barrier condition in the FBSDE set-up and any Dirichlet boundary condition in the PDE set-up, in both low and high dimensions.
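The deep BSDE machinery is beyond a short sketch, but the per-path barrier bookkeeping the paper encodes as extra graph nodes has a plain Monte Carlo analogue; a minimal sketch for a discretely monitored up-and-out call (all parameter values are invented for illustration):

```python
import numpy as np

# Plain Monte Carlo with per-path barrier monitoring -- the same
# bookkeeping the paper adds as computational-graph nodes.
rng = np.random.default_rng(0)
s0, r, sigma, T = 100.0, 0.01, 0.2, 1.0
strike, barrier = 100.0, 130.0
n_paths, n_steps = 50_000, 100
dt = T / n_steps

S = np.full(n_paths, s0)
alive = np.ones(n_paths, dtype=bool)   # "barrier not yet breached" flag
for _ in range(n_steps):
    z = rng.standard_normal(n_paths)
    S *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    alive &= S < barrier               # monitor the up-and-out barrier

# Up-and-out call: payoff only on paths that never touched the barrier.
payoff = np.where(alive, np.maximum(S - strike, 0.0), 0.0)
price = float(np.exp(-r * T) * payoff.mean())
```

In the deep BSDE version, the `alive` flag and the state at breach feed a loss that penalizes misreplication of this payoff, rather than a direct discounted average.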

arXiv

Scanner data offer new opportunities for CPI and HICP calculation. They can be obtained from a wide variety of retailers (supermarkets, home electronics, Internet shops, etc.) and provide information at the barcode level. One of the advantages of scanner data is that they contain complete transaction information, i.e. prices and quantities for every item sold. Before use, scanner data must be carefully processed: after cleaning the data and unifying product names, products should be classified (e.g. into COICOP 5 or below), matched, filtered and aggregated. These procedures often require building new IT tools or writing custom scripts (R, Python, Mathematica, SAS, and others). Another new challenge connected with scanner data is the appropriate choice of the index formula. In this article we present a proposal for implementing the individual stages of handling scanner data. We also point out potential problems that arise during scanner data processing, together with their solutions. Finally, we compare a large number of price index methods on real scanner datasets and verify their sensitivity to the adopted data filtering and aggregation methods.
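As a flavor of the index-formula comparison, two common formulas computed on a toy matched sample: the unweighted Jevons index and the expenditure-share-weighted Törnqvist index, the latter exploiting the quantity information that scanner data provide (all prices and quantities below are made up):

```python
import numpy as np

# Toy matched sample: prices and quantities for the same three items in
# a base period 0 and a current period t.
p0 = np.array([2.00, 5.50, 1.20]); q0 = np.array([100, 40, 250])
pt = np.array([2.10, 5.45, 1.35]); qt = np.array([ 95, 42, 230])

# Jevons: unweighted geometric mean of price relatives (no quantities).
jevons = float(np.prod(pt / p0) ** (1 / len(p0)))

# Tornqvist: geometric mean of price relatives weighted by average
# expenditure shares -- this is where scanner quantities pay off.
s0 = p0 * q0 / np.sum(p0 * q0)
st = pt * qt / np.sum(pt * qt)
tornqvist = float(np.prod((pt / p0) ** (0.5 * (s0 + st))))
```

The two formulas can disagree noticeably on real scanner data, which is exactly the sensitivity the article investigates.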