Research articles for 2020-09-23

A Deep Learning Approach for Dynamic Balance Sheet Stress Testing
Anastasios Petropoulos, Vassilis Siakoulis, Konstantinos P. Panousis, Theodoros Christophides, Sotirios Chatzis
arXiv

In the aftermath of the financial crisis, supervisory authorities have considerably improved their approaches to financial stress testing. However, they have received significant criticism from market participants due to the methodological assumptions and simplifications employed, which are considered not to accurately reflect real conditions. First and foremost, current stress testing methodologies attempt to simulate the risks underlying a financial institution's balance sheet by using several satellite models, making their integration a challenging task with significant estimation errors. Secondly, they still refrain from employing advanced statistical techniques, like machine learning, which better capture the nonlinear nature of adverse shocks. Finally, the static balance sheet assumption that is often employed implies that the management of a bank passively monitors the realization of the adverse scenario but does nothing to mitigate its impact. To address the above criticism, this study introduces a novel deep learning approach to dynamic balance sheet stress testing. Experimental results give strong evidence that deep learning applied to large financial/supervisory datasets creates a state-of-the-art paradigm capable of simulating real-world scenarios more efficiently.
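
As a rough illustration of the kind of model involved (a minimal sketch, not the authors' architecture; the feature counts and names below are invented), a feed-forward network mapping macro-scenario features to next-period balance-sheet items could be set up as follows:

    import numpy as np
    import tensorflow as tf

    # Invented shapes: 8 macro-scenario features -> 5 balance-sheet line items.
    X = np.random.randn(1000, 8).astype("float32")   # scenario features per bank-quarter
    y = np.random.randn(1000, 5).astype("float32")   # next-period balance-sheet items

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(5),   # joint projection replaces separate satellite models
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=10, batch_size=32, verbose=0)
    stressed = model.predict(X[:1])  # balance-sheet projection under an adverse scenario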



Analysis of the main factors for the configuration of green ports in Colombia
Abraham Londono Pineda, Tatiana Arias Naranjo, Jose Alejandro Cano Arenas
arXiv

This study analyzes the factors affecting the configuration and consolidation of green ports in Colombia. For this purpose, a case study of the maritime cargo ports of Cartagena, Barranquilla and Santa Marta is performed, using semi-structured interviews to identify the factors contributing to the consolidation of green ports and the factors guiding sustainability management in ports that have not yet been certified as green ports. The results show that environmental regulations are a starting point, not the key factor, in the consolidation of green ports. In conclusion, the conversion of Colombian ports to green ports should not be limited to the attainment of certifications, such as the Ecoport certification, but should ensure a contribution to sustainable development through its economic, social and environmental dimensions and the achievement of the SDGs.



Ants, robots, humans: a self-organizing, goal-driven modeling approach
Martin Jaraiz
arXiv

Most of the grand challenges facing humanity today involve complex agent-based systems, in fields such as epidemiology, economics or ecology. However, identifying the general principles underlying the self-organizing capabilities of these complex systems remains a pending task. This article presents a novel modeling approach capable of self-deploying both the system structure and the activities of goal-driven agents that can take appropriate actions to achieve their goals. Humans, robots, and animals are all endowed with this type of behavior. Self-organization is shown to emerge from the decisions of a common rational activity algorithm based on the information of a system-specific goal-dependency network. The unique self-deployment feature of this systematic approach can considerably boost the range and depth of application of agent-based modeling.
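
The flavor of a goal-driven activity rule over a goal-dependency network can be conveyed with a toy sketch (the network and rule below are invented for illustration; the paper's algorithm is richer):

    # Invented goal-dependency network: each goal lists its prerequisite goals.
    DEPS = {"build_nest": ["gather_material"],
            "gather_material": ["find_material"],
            "find_material": []}

    def next_action(goal, achieved):
        """Act on the deepest unmet prerequisite of the goal, depth-first."""
        for pre in DEPS[goal]:
            if pre not in achieved:
                return next_action(pre, achieved)
        return goal

    achieved = set()
    while "build_nest" not in achieved:
        act = next_action("build_nest", achieved)
        achieved.add(act)   # performing the action achieves the (sub)goal
        print("agent does:", act)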



CoVaR with volatility clustering, heavy tails and non-linear dependence
Michele Leonardo Bianchi, Giovanni De Luca, Giorgia Rivieccio
arXiv

In this paper we estimate the conditional value-at-risk (CoVaR) by fitting different multivariate parametric models capturing several stylized facts about multivariate financial time series of equity returns: heavy tails, negative skew, asymmetric dependence, and volatility clustering. While the volatility clustering effect is captured by AR-GARCH dynamics of the GJR type, the other stylized facts are captured through non-Gaussian multivariate models and copula functions. The CoVaR$^{\leq}$ is computed on the basis of the multivariate normal model, the multivariate normal tempered stable (MNTS) model, the multivariate generalized hyperbolic (MGH) model, and four possible copula functions. These risk measure estimates are compared to the CoVaR$^{=}$ based on the multivariate normal GARCH model. The comparison is conducted by backtesting the competing models over the time span from January 2007 to March 2020. In the empirical study we consider a sample of listed banks of the euro area belonging to the main or to the additional global systemically important banks (GSIBs) assessment sample.
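
For the univariate filtering step, AR-GARCH dynamics of the GJR type can be fitted with the arch package; a minimal sketch on a simulated return series (the copula/CoVaR step on the standardized residuals, which the paper's method requires, is omitted here):

    import numpy as np
    from arch import arch_model  # pip install arch

    r = np.random.standard_t(df=5, size=2000)  # stand-in for daily percent returns

    # AR(1) mean with GJR-GARCH(1,1) variance: o=1 adds the leverage (GJR) term.
    am = arch_model(r, mean="AR", lags=1, vol="GARCH", p=1, o=1, q=1, dist="t")
    res = am.fit(disp="off")

    z = res.std_resid  # standardized residuals, input to the copula / multivariate model
    print(res.params)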



Efficient Portfolios
Keith A. Lewis
arXiv

Given two random realized returns on an investment, which is to be preferred? This is a fundamental problem in finance that has no definitive solution, except in the case where one investment always returns more than the other. In 1952 Markowitz and Roy introduced the following criterion for risk versus return in portfolio selection: if two portfolios have the same expected realized return, then prefer the one with smaller variance. An efficient portfolio has the least variance among all portfolios having the same expected realized return.

The primary contribution of this short note is the observation that the CAPM formula holds for realized returns as random variables, not just for their expectations. This follows directly from writing down a mathematical model for one-period investments.
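
A schematic statement of the observation (the notation here is ours, not the note's): if an efficient portfolio is a mix of the riskless asset and the market portfolio, its realized return satisfies $R - R_0 = \beta (R_M - R_0)$ identically as random variables, where $R_0$ is the riskless realized return, $R_M$ the market's, and $\beta = \mathrm{Cov}(R, R_M)/\mathrm{Var}(R_M)$; taking expectations on both sides recovers the textbook CAPM relation.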



Optimal hedging of a perpetual American put with a single trade
Cheng Cai, Tiziano De Angelis, Jan Palczewski
arXiv

It is well-known that using delta hedging to hedge financial options is not feasible in practice. Traders often rely on discrete-time hedging strategies based on fixed trading times or fixed trading prices (i.e., trades only occur if the underlying asset's price reaches some predetermined values). Motivated by this insight and with the aim of obtaining explicit solutions, we consider the seller of a perpetual American put option who can hedge her portfolio once until the underlying stock price leaves a certain range of values $(a,b)$. We determine optimal trading boundaries as functions of the initial stock holding, and an optimal hedging strategy for a bond/stock portfolio. Optimality here refers to the variance of the hedging error at the (random) time when the stock leaves the interval $(a,b)$. Our study leads to analytical expressions for both the optimal boundaries and the optimal stock holding, which can be evaluated numerically with no effort.
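
Schematically (our notation, not the paper's): with exit time $\tau = \inf\{t : S_t \notin (a,b)\}$ and a bond/stock portfolio $(\beta, \gamma)$ that may be rebalanced once before $\tau$, the criterion is to minimize $\mathrm{Var}\big(\Pi_\tau - P(S_\tau)\big)$, the variance of the gap between the portfolio value $\Pi_\tau$ and the put value $P(S_\tau)$ at exit; the optimal trading boundaries and stock holding reported in the paper are the closed-form minimizers of this quantity.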



Pricing Cryptocurrency Options
Ai Jun Hou, Weining Wang, Cathy Y. H. Chen, Wolfgang Karl Härdle
arXiv

Cryptocurrencies, especially Bitcoin (BTC), which comprise a new digital asset class, have drawn extraordinary worldwide attention. The characteristics of the cryptocurrency/BTC include a high level of speculation, extreme volatility and price discontinuity. We propose a pricing mechanism based on a stochastic volatility with a correlated jump (SVCJ) model and compare it to a flexible co-jump model by Bandi and Renò (2016). The estimation results of both models confirm the impact of jumps and co-jumps on options obtained via simulation and an analysis of the implied volatility curve. We show that a sizeable proportion of price jumps are significantly and contemporaneously anti-correlated with jumps in volatility. Our study comprises pioneering research on pricing BTC options. We show how the proposed pricing mechanism underlines the importance of jumps in cryptocurrency markets.
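
For reference, the SVCJ dynamics in their standard Duffie-Pan-Singleton form (parameter names follow the usual convention, not necessarily the paper's): $d\log S_t = \mu\,dt + \sqrt{V_t}\,dW_t^S + Z^y\,dN_t$ and $dV_t = \kappa(\theta - V_t)\,dt + \sigma_v \sqrt{V_t}\,dW_t^V + Z^v\,dN_t$, where $N_t$ is a Poisson process triggering simultaneous jumps $Z^y$ and $Z^v$ in returns and variance, and the Brownian motions are correlated, $dW_t^S\,dW_t^V = \rho\,dt$. The anti-correlation between $Z^y$ and $Z^v$ is exactly the co-jump effect discussed above.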



Qlib: An AI-oriented Quantitative Investment Platform
Xiao Yang, Weiqing Liu, Dong Zhou, Jiang Bian, Tie-Yan Liu
arXiv

Quantitative investment aims to maximize return and minimize risk in a sequential trading period over a set of financial instruments. Recently, inspired by the rapid development and great potential of AI technologies in generating remarkable innovation in quantitative investment, there has been an increasing adoption of AI-driven workflows for quantitative research and practical investment. While enriching quantitative investment methodology, AI technologies have also raised new challenges for quantitative investment systems. In particular, the new learning paradigms for quantitative investment call for an infrastructure upgrade to accommodate the renovated workflow; moreover, the data-driven nature of AI technologies demands an infrastructure with more powerful performance; additionally, there exist unique challenges in applying AI technologies to different tasks in financial scenarios. To address these challenges and bridge the gap between AI technologies and quantitative investment, we design and develop Qlib, which aims to realize the potential, empower the research, and create the value of AI technologies in quantitative investment.
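
A minimal usage sketch, following the project's public README at the time of writing (the data path and stock universe below are placeholders, and the API may have evolved):

    import qlib
    from qlib.data import D

    # Initialize with a local data folder prepared by Qlib's data-collector scripts.
    qlib.init(provider_uri="~/.qlib/qlib_data/cn_data")

    # Load a feature panel; "$close"/"$volume" are Qlib's built-in field names.
    instruments = D.instruments(market="csi300")
    df = D.features(instruments, ["$close", "$volume"],
                    start_time="2017-01-01", end_time="2019-12-31")
    print(df.head())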



Semi-analytic pricing of double barrier options with time-dependent barriers and rebates at hit
Andrey Itkin, Dmitry Muravey
arXiv

We continue a series of papers devoted to the construction of semi-analytic solutions for barrier options. These options are written on an underlying following a simple one-factor diffusion model, but all the parameters of the model, as well as the barriers, are time-dependent. We have shown that these solutions are systematically more efficient for pricing and calibration than, e.g., the corresponding finite-difference solvers. In this paper we extend this technique to the pricing of double barrier options and present two approaches to solving the problem: the General Integral transform method and the Heat Potential method. Our results confirm that for double barrier options these semi-analytic techniques are also more efficient than the traditional numerical methods used to solve this type of problem.
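
Schematically (generic notation, not the paper's): after the usual change of variables the problem becomes a heat equation $u_t = u_{xx}$ on a time-dependent strip $y_-(t) < x < y_+(t)$ with the rebates as boundary data; both the General Integral transform and the Heat Potential methods reduce this boundary-value problem to one-dimensional Volterra integral equations for unknown boundary functions, which are cheap to solve numerically and yield the semi-analytic prices.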



Simulation-based optimisation of the timing of loan recovery across different portfolios
Arno Botha, Conrad Beyers, Pieter de Villiers
arXiv

A novel procedure is presented for the objective comparison and evaluation of a bank's decision rules in optimising the timing of loan recovery. This procedure is based on finding a delinquency threshold at which the financial loss of a loan portfolio (or segment therein) is minimised. Our procedure is an expert system that incorporates the time value of money, costs, and the fundamental trade-off between accumulating arrears versus forsaking future interest revenue. Moreover, the procedure can be used with different delinquency measures (other than payments in arrears), thereby allowing an indirect comparison of these measures. We demonstrate the system across a range of credit risk scenarios and portfolio compositions. The computational results show that threshold optima can exist across all reasonable values of both the payment probability (default risk) and the loss rate (loan collateral). In addition, the procedure reacts positively to portfolios afflicted by either systematic defaults (such as during an economic downturn) or episodic delinquency (i.e., cycles of curing and re-defaulting). In optimising a portfolio's recovery decision, our procedure can better inform the quantitative aspects of a bank's collection policy than relying on arbitrary discretion alone.
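
The core of the threshold search can be caricatured in a few lines (a toy sketch with invented cash-flow and loss assumptions, not the paper's expert system):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy portfolio: monthly payment outcomes (True = paid) and running arrears counts.
    n_loans, n_months, p_pay = 5000, 60, 0.95
    paid = rng.random((n_loans, n_months)) < p_pay
    arrears = np.cumsum(~paid, axis=1)
    pv = 0.995 ** np.arange(n_months)   # monthly discount factors

    def portfolio_loss(d):
        """Toy loss when each loan is recovered once its arrears first reach d:
        contractual PV minus payments received before recovery minus the
        discounted collateral value (60% of the remaining instalments)."""
        hit = arrears >= d
        t = np.where(hit.any(axis=1), hit.argmax(axis=1), n_months)  # recovery month
        before = np.arange(n_months)[None, :] < t[:, None]
        received = (paid * pv * before).sum(axis=1)
        disc_at_t = np.where(t < n_months, pv[np.minimum(t, n_months - 1)], 0.0)
        collateral = 0.6 * (n_months - t) * disc_at_t
        return float((pv.sum() - received - collateral).sum())

    best = min(range(1, 13), key=portfolio_loss)   # search thresholds of 1..12 months
    print("loss-minimising arrears threshold (months):", best)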



Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications
Nassim Nicholas Taleb
arXiv

The monograph investigates the misapplication of conventional statistical techniques to fat tailed distributions and looks for remedies, when possible.

Switching from thin-tailed to fat-tailed distributions requires more than "changing the color of the dress". Traditional asymptotics deal mainly with either $n=1$ or $n=\infty$, and the real world is in between, under the "laws of the medium numbers", which vary widely across specific distributions. Both the law of large numbers and the generalized central limit mechanisms operate in highly idiosyncratic ways outside the standard Gaussian or Lévy-stable basins of convergence.

A few examples:

+ The sample mean is rarely in line with the population mean, with consequences for "naive empiricism", but it can sometimes be estimated via parametric methods (see the sketch after this list).

+ The "empirical distribution" is rarely empirical.

+ Parameter uncertainty has compounding effects on statistical metrics.

+ Dimension reduction (principal components) fails.

+ Inequality estimators (GINI or quantile contributions) are not additive and produce wrong results.

+ Many "biases" found in psychology become entirely rational under more sophisticated probability distributions

+ Most of the failures of financial economics, econometrics, and behavioral economics can be attributed to using the wrong distributions.
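
A quick illustration of the first point (a minimal sketch; the distribution and numbers are ours, chosen for illustration):

    import numpy as np

    rng = np.random.default_rng(1)

    # Pareto with tail index 1.2 (x_min = 1): the mean exists, E[X] = 1.2/0.2 = 6,
    # but the sample mean converges extremely slowly.
    alpha, pop_mean = 1.2, 6.0
    for n in (10**2, 10**4, 10**6):
        means = [(1 + rng.pareto(alpha, n)).mean() for _ in range(20)]
        print(f"n={n:>7}: sample means span {min(means):.2f} .. {max(means):.2f} "
              f"(population mean {pop_mean})")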

This book, the first volume of the Technical Incerto, weaves a narrative around published journal articles.



Stock Price Prediction Using Machine Learning and LSTM-Based Deep Learning Models
Sidra Mehtab, Jaydip Sen, Abhishek Dutta
arXiv

Prediction of stock prices has been an important area of research for a long time. While supporters of the efficient market hypothesis believe that it is impossible to predict stock prices accurately, there are formal propositions demonstrating that accurate modeling and the design of appropriate variables may lead to models with which stock prices and stock price movement patterns can be predicted with high accuracy. In this work, we propose a hybrid modeling approach for stock price prediction, building different machine learning and deep learning-based models. For the purpose of our study, we have used NIFTY 50 index values of the National Stock Exchange (NSE) of India from December 29, 2014 to July 31, 2020. We have built eight regression models using training data consisting of NIFTY 50 index records from December 29, 2014 to December 28, 2018. Using these regression models, we predicted the open values of NIFTY 50 for the period from December 31, 2018 to July 31, 2020. We then augment the predictive power of our forecasting framework by building four deep learning-based regression models using long short-term memory (LSTM) networks with a novel approach of walk-forward validation. We exploit the power of LSTM regression models in forecasting the future NIFTY 50 open values using four different models that differ in their architecture and in the structure of their input data. Extensive results are presented on various metrics for all the regression models. The results clearly indicate that the LSTM-based univariate model that uses one week of prior data as input for predicting the next week's open values of the NIFTY 50 time series is the most accurate model.
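
A minimal sketch of the walk-forward idea for the univariate weekly setting (invented data and hyperparameters; not the authors' exact models):

    import numpy as np
    import tensorflow as tf

    series = np.cumsum(np.random.randn(1500)).astype("float32")  # stand-in for open values

    def windows(x, n_in=5, n_out=5):
        """One week (5 trading days) of inputs predicting the next week's values."""
        X, y = [], []
        for i in range(len(x) - n_in - n_out + 1):
            X.append(x[i:i + n_in])
            y.append(x[i + n_in:i + n_in + n_out])
        return np.array(X)[..., None], np.array(y)

    X, y = windows(series)
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(5, 1)),
        tf.keras.layers.Dense(5),
    ])
    model.compile(optimizer="adam", loss="mse")

    # Walk-forward validation: refit on an expanding window, test on the next block.
    step = 200
    for end in range(800, len(X) - step + 1, step):
        model.fit(X[:end], y[:end], epochs=5, verbose=0)
        mse = model.evaluate(X[end:end + step], y[end:end + step], verbose=0)
        print(f"block ending at {end}: out-of-sample MSE {mse:.3f}")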



The characteristic function of Gaussian stochastic volatility models: an analytic expression
Eduardo Abi Jaber
arXiv

Stochastic volatility models based on Gaussian processes, like fractional Brownian motion, are able to reproduce important stylized facts of financial markets such as rich autocorrelation structures, persistence and roughness of sample paths. This is made possible by virtue of the flexibility introduced in the choice of the covariance function of the Gaussian process. The price to pay is that, in general, such models are no longer Markovian nor semimartingales, which limits their practical use. We derive, in two different ways, an explicit analytic expression for the joint characteristic function of the log-price and its integrated variance in general Gaussian stochastic volatility models. Such analytic expression can be approximated by closed form matrix expressions stemming from Wishart distributions. This opens the door to fast approximation of the joint density and pricing of derivatives on both the stock and its realized variance using Fourier inversion techniques. In the context of rough volatility modeling, our results apply to the (rough) fractional Stein-Stein model and provide the first analytic formulae for option pricing known to date, generalizing that of Stein-Stein, Schöbel-Zhu and a special case of Heston.
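
Schematically (the dynamics below are a standard setting we assume for illustration, not taken verbatim from the paper): with log-price $X_t$ satisfying $dX_t = -\frac{1}{2}\sigma_t^2\,dt + \sigma_t\,dW_t$ and $(\sigma_t)$ a Gaussian process, the object computed is the joint characteristic function $E\big[\exp\big(iu X_T + iv \int_0^T \sigma_t^2\,dt\big)\big]$; Fourier inversion of this function then yields prices of derivatives on both $S_T = S_0 e^{X_T}$ and the realized variance.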