Research articles for 2021-01-31

A deep learning algorithm for optimal investment strategies
Daeyung Gim, Hyungbin Park

This paper treats the Merton problem: how to invest in a safe asset and risky assets to maximize an investor's utility, where investment opportunities are modeled by a $d$-dimensional state process. The problem is represented by a partial differential equation with an optimization term: the Hamilton-Jacobi-Bellman equation. The main purpose of this paper is to solve the partial differential equations derived from the Hamilton-Jacobi-Bellman equations with a deep learning algorithm, the Deep Galerkin method, first suggested by Sirignano and Spiliopoulos (2018). We then apply the algorithm to solve the PDE under several model settings and compare the result with the solution obtained by a finite difference method.
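The classical one-asset Merton problem (geometric Brownian motion, CRRA utility with risk aversion gamma) has a closed-form HJB solution, which makes a natural sanity check for any numerical solver of the kind described above. A minimal sketch, with illustrative parameter values that are not taken from the paper:

```python
# Classical Merton benchmark: with one risky asset following geometric
# Brownian motion and CRRA utility with risk aversion gamma, the HJB
# equation reduces to a constant optimal fraction of wealth in the
# risky asset.
def merton_fraction(mu, r, sigma, gamma):
    """Optimal fraction of wealth invested in the risky asset."""
    return (mu - r) / (gamma * sigma ** 2)

# Illustrative parameters (not from the paper).
pi_star = merton_fraction(mu=0.08, r=0.02, sigma=0.2, gamma=2.0)
print(round(pi_star, 4))  # 0.75
```

A numerical HJB solution under these dynamics should recover this constant allocation, which is how a Deep Galerkin or finite difference solver can be validated before moving to richer state processes.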

A new class of tail dependence measures and their maximization
Takaaki Koike, Shogo Kato, Marius Hofert

A new class of measures of bivariate tail dependence is proposed, defined as a limit of a measure of concordance of the underlying copula restricted to the tail region of interest. The proposed tail dependence measures include tail dependence coefficients as special cases, but capture the extremal relationship between random variables not only along the diagonal but along all angles, weighted by the so-called tail generating measure. As a result, the proposed tail dependence measures overcome the issue that tail dependence coefficients underestimate the extent of extreme co-movements. We also consider the so-called maximal and minimal tail dependence measures, defined as the maximum and minimum of the tail dependence measures over all tail generating measures for a given copula. It turns out that the minimal tail dependence measure coincides with the tail dependence coefficient, while the maximal tail dependence measure overestimates the degree of extreme co-movements. We investigate properties, representations and examples of the proposed tail dependence measures, and their performance is demonstrated in a series of numerical experiments. For a fair assessment of tail dependence and stable estimation under small sample sizes, we support the use of tail dependence measures weighted over all angles rather than the maximal and minimal ones.
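As a point of reference for the coefficients this class generalises, the classical lower tail dependence coefficient λ_L = lim_{q→0} C(q,q)/q can be estimated empirically along the diagonal. A minimal sketch at a fixed finite threshold q (sample size and threshold are illustrative, and this is the diagonal-only quantity the abstract argues is too coarse):

```python
import random

def empirical_lower_tail_dep(u, v, q):
    """Empirical estimate of the lower tail dependence coefficient
    lambda_L = lim_{q->0} P(V <= q | U <= q), evaluated at a fixed
    threshold q, i.e. C_n(q, q) / q along the diagonal only."""
    n = len(u)
    joint = sum(1 for ui, vi in zip(u, v) if ui <= q and vi <= q) / n
    return joint / q

random.seed(0)
n = 100_000
u = [random.random() for _ in range(n)]
# Comonotonic pair (perfect positive dependence): estimate near 1.
lam_comonotone = empirical_lower_tail_dep(u, u, q=0.05)
# Independent pair: estimate near q itself, i.e. close to 0.
v = [random.random() for _ in range(n)]
lam_independent = empirical_lower_tail_dep(u, v, q=0.05)
```

The sensitivity of this diagonal estimator to the choice of q under small samples is one motivation the abstract gives for angle-weighted measures.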

Estimating value at risk and conditional tail expectation for extreme and aggregate risks
Suman Thapa, Yiqiang Q. Zhao

In this paper, we investigate risk measures such as value at risk (VaR) and the conditional tail expectation (CTE) of the extreme (maximum and minimum) and the aggregate (total) of two dependent risks. In finance, insurance and other fields, when people invest their money in two or more dependent or independent markets, it is very important to know the extreme and total risk before the investment. Finding these risk measures for dependent risks is quite challenging and, to the best of our knowledge, has not been reported in the literature. We use the FGM copula to model the dependence, as it is relatively simple for computational purposes and has had empirical success. The marginals of the risks are taken to be exponential and Pareto, separately, for the case of extreme risk, and exponential for the case of total risk. The effect of the degree of dependence on the VaR and CTE of the extreme and total risks is analyzed. We also make comparisons between the dependent and independent risks. Moreover, we propose a new risk measure called the median of tail (MoT) and investigate it for the extreme and aggregate dependent risks.
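A minimal sketch of the kind of computation involved: sampling from the FGM copula C(u,v) = uv[1 + θ(1-u)(1-v)] by conditional inversion (a standard sampler, not necessarily the paper's method, which is analytical) and estimating the VaR of the total of two exponential risks by Monte Carlo. All parameter values are illustrative:

```python
import math
import random

def fgm_pair(theta, rng):
    """Sample (u, v) from the FGM copula
    C(u, v) = u*v*(1 + theta*(1-u)*(1-v)) by conditional inversion."""
    u, w = rng.random(), rng.random()
    a = theta * (1 - 2 * u)
    if abs(a) < 1e-12:
        return u, w
    # Solve a*v^2 - (1+a)*v + w = 0 for the root lying in [0, 1].
    v = ((1 + a) - math.sqrt((1 + a) ** 2 - 4 * a * w)) / (2 * a)
    return u, v

def var_of_sum(theta, lam1, lam2, alpha, n, seed=0):
    """Monte Carlo VaR_alpha of X + Y with exponential marginals
    coupled by an FGM copula (illustrative rates, not the paper's)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        u, v = fgm_pair(theta, rng)
        x = -math.log(1 - u) / lam1   # inverse exponential CDF
        y = -math.log(1 - v) / lam2
        totals.append(x + y)
    totals.sort()
    return totals[int(alpha * n)]

var_dep = var_of_sum(theta=1.0, lam1=1.0, lam2=1.0, alpha=0.95, n=50_000)
var_ind = var_of_sum(theta=0.0, lam1=1.0, lam2=1.0, alpha=0.95, n=50_000)
```

With θ = 0 the risks are independent and the total is Gamma(2, 1), whose 95% quantile is about 4.74; positive FGM dependence (θ = 1) pushes the VaR of the total higher, illustrating the effect of the dependence parameter that the paper analyzes.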

Evaluating the Discrimination Ability of Proper Multivariate Scoring Rules
Carol Alexander, Michael Coulon, Yang Han, Xiaochun Meng

Proper scoring rules are commonly applied to quantify the accuracy of distribution forecasts. Given an observation, they assign a scalar score to each distribution forecast, with the lowest expected score attributed to the true distribution. The energy and variogram scores are two rules that have recently gained popularity in multivariate settings because their computation does not require a forecast to have a parametric density function, so they are broadly applicable. Here we conduct a simulation study to compare the discrimination ability of the energy score and three variogram scores. Compared with other studies, our simulation design is more realistic because it is supported by a historical data set containing commodity prices, currencies and interest rates, and our data-generating processes include a diverse selection of models with different marginal distributions, dependence structures and calibration windows. This facilitates a comprehensive comparison of the performance of proper scoring rules in different settings. To compare the scores we use three metrics: the mean relative score, the error rate and a generalised discrimination heuristic. Overall, we find that the variogram score with parameter p=0.5 outperforms the energy score and the other two variogram scores.
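The variogram score itself is straightforward to compute for an ensemble forecast. A minimal sketch with unit weights (the weighting scheme and data below are illustrative, not the study's design):

```python
import random

def variogram_score(y, ensemble, p=0.5):
    """Variogram score of order p (Scheuerer & Hamill, 2015) with unit
    weights: sum over component pairs (i, j) of the squared difference
    between the observed variogram |y_i - y_j|^p and its ensemble
    (Monte Carlo) estimate. Lower is better."""
    d, m = len(y), len(ensemble)
    score = 0.0
    for i in range(d):
        for j in range(i + 1, d):
            obs = abs(y[i] - y[j]) ** p
            fc = sum(abs(x[i] - x[j]) ** p for x in ensemble) / m
            score += (obs - fc) ** 2
    return score

rng = random.Random(1)
d, m = 3, 200
# Hypothetical ensemble forecast: m draws from a standard normal model.
ensemble = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(m)]
s_typical = variogram_score([0.0] * d, ensemble)       # observation near centre
s_outlier = variogram_score([5.0, -5.0, 5.0], ensemble)  # observation far away
```

A forecast whose pairwise-difference structure matches the observation receives a lower score, which is the discrimination property the simulation study measures across the energy score and the p = 0.5, 1, 2 variogram variants.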

Extensions of Random Orthogonal Matrix Simulation for Targetting Kollo Skewness
Carol Alexander, Xiaochun Meng, Wei Wei

Modelling multivariate systems is important for many applications in engineering and operational research. The multivariate distributions under scrutiny usually have no analytic or closed form. Their modelling therefore employs a numerical technique, typically multivariate simulation, which can have very high dimensions. Random Orthogonal Matrix (ROM) simulation is a method that has gained some popularity because it avoids certain simulation errors: it exactly matches a target mean, covariance matrix and certain higher moments in every simulation. This paper extends the ROM simulation algorithm presented by Hanke et al. (2017), hereafter referred to as HPSW, which matches the target mean, covariance matrix and Kollo skewness vector exactly. Our first contribution is to establish necessary and sufficient conditions for the HPSW algorithm to work. Our second contribution is to develop a general approach for constructing admissible values in the HPSW algorithm. Our third, theoretical contribution is to analyse the effect of multivariate sample concatenation on the target Kollo skewness. Finally, we illustrate the extensions we develop here using a simulation study.

Liquidations: DeFi on a Knife-edge
Daniel Perez, Sam M. Werner, Jiahua Xu, Benjamin Livshits

The trustless nature of permissionless blockchains renders overcollateralization a key safety component relied upon by decentralized finance (DeFi) protocols. Nonetheless, factors such as price volatility may undermine this mechanism. In order to protect protocols from suffering losses, undercollateralized positions can be liquidated. In this paper, we present the first in-depth empirical analysis of liquidations on protocols for loanable funds (PLFs). We examine Compound, one of the most widely used PLFs, for a period starting from its conception to September 2020. We analyze participants' behavior and risk appetite in particular, to elucidate recent developments in the dynamics of the protocol. Furthermore, we assess how this has changed with a modification in Compound's incentive structure and show that variations of only 3% in an asset's dollar price can result in over 10m USD becoming liquidatable. To further understand the implications of this, we investigate the efficiency of liquidators. We find that liquidators' efficiency has improved significantly over time, with currently over 70% of liquidatable positions being liquidated immediately. Lastly, we discuss how a false sense of security, fostered by a misconception of the stability of non-custodial stablecoins, increases the overall liquidation risk faced by Compound participants.
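The knife-edge effect can be illustrated with a stylized health check of the kind PLFs such as Compound apply: a position becomes liquidatable once its borrowing capacity (collateral value scaled by the collateral factor) falls below its outstanding borrow. The figures below are hypothetical, not drawn from the paper's data set:

```python
def is_liquidatable(collateral_usd, collateral_factor, borrow_usd):
    """Stylized Compound-style health check: the position can be
    liquidated once borrowing capacity drops below the borrow."""
    return collateral_usd * collateral_factor < borrow_usd

# Hypothetical position borrowing close to the maximum allowed.
collateral_usd = 1_000.0    # collateral valued in USD
collateral_factor = 0.75    # fraction of collateral counted as capacity
borrow_usd = 740.0          # outstanding borrow in USD

before = is_liquidatable(collateral_usd, collateral_factor, borrow_usd)
# A 3% drop in the collateral asset's dollar price flips the position.
after = is_liquidatable(collateral_usd * 0.97, collateral_factor, borrow_usd)
print(before, after)  # False True
```

Positions parked just below the threshold are safe at current prices but flip to liquidatable after a small price move, which is the aggregate sensitivity the paper quantifies.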

Modelling Sovereign Credit Ratings: Evaluating the Accuracy and Driving Factors using Machine Learning Techniques
Bart H.L. Overes, Michel van der Wel

Sovereign credit ratings summarize the creditworthiness of countries. These ratings have a large influence on the economy and the yields at which governments can issue new debt. This paper investigates the use of a Multilayer Perceptron (MLP), Classification and Regression Trees (CART), and an Ordered Logit (OL) model for the prediction of sovereign credit ratings. We show that MLP is best suited for predicting sovereign credit ratings, with an accuracy of 68%, followed by CART (59%) and OL (33%). Investigation of the determining factors shows that roughly the same explanatory variables are important in all models, with regulatory quality, GDP per capita and unemployment rate as common important variables. Consistent with economic theory, a higher regulatory quality and/or GDP per capita are associated with a higher credit rating, while a higher unemployment rate is associated with a lower credit rating.

New Formulations of Ambiguous Volatility with an Application to Optimal Dynamic Contracting
Peter G. Hansen

I introduce novel preference formulations which capture aversion to ambiguity about unknown and potentially time-varying volatility. I compare these preferences with Gilboa and Schmeidler's maxmin expected utility as well as variational formulations of ambiguity aversion. The impact of ambiguity aversion is illustrated in a simple static model of portfolio choice, as well as a dynamic model of optimal contracting under repeated moral hazard. Implications for investor beliefs, optimal design of corporate securities, and asset pricing are explored.
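The static portfolio-choice effect of ambiguity aversion can be sketched with Gilboa and Schmeidler's maxmin criterion: maximize the worst-case expected utility over a set of candidate volatilities. This toy grid search uses a mean-variance approximation of expected utility and made-up parameters, not the paper's formulations:

```python
def mean_variance_utility(pi, mu, sigma, gamma=2.0):
    """Mean-variance approximation of expected utility for a
    fraction pi of wealth held in a single risky asset."""
    return pi * mu - 0.5 * gamma * (pi * sigma) ** 2

def maxmin_allocation(mu, sigmas, gamma=2.0, grid=1000):
    """Gilboa-Schmeidler maxmin choice: pick the allocation whose
    worst-case utility over the ambiguity set of volatilities is best."""
    best_pi, best_val = 0.0, float("-inf")
    for k in range(grid + 1):
        pi = k / grid
        worst = min(mean_variance_utility(pi, mu, s, gamma) for s in sigmas)
        if worst > best_val:
            best_pi, best_val = pi, worst
    return best_pi

# Singleton ambiguity set: recovers the standard mu/(gamma*sigma^2) rule.
pi_no_ambiguity = maxmin_allocation(0.06, [0.20])
# Ambiguity about volatility: the worst case governs, shrinking the position.
pi_ambiguity = maxmin_allocation(0.06, [0.20, 0.30])
```

With a long position, the worst case is always the highest volatility in the set, so ambiguity aversion behaves like decision-making under the most pessimistic volatility scenario and reduces the risky allocation.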

The Behavioral Economics of Intrapersonal Conflict: A Critical Assessment
Sebastian Krügel, Matthias Uhl

Preferences often change -- even over short time intervals -- due either to the mere passage of time (present-biased preferences) or to changes in environmental conditions (state-dependent preferences). On the basis of empirical findings in the context of state-dependent preferences, we critically discuss the Aristotelian view of unitary decision makers in economics and urge a more Heraclitean perspective on human decision-making. We illustrate that the conceptualization of preferences as present-biased or state-dependent has very different normative implications under the Aristotelian view, although the two concepts are empirically hard to distinguish. This is highly problematic, as it renders almost any paternalistic intervention justifiable.

The Rise of Multiple Institutional Affiliations in Academia
Hanna Hottenrott, Michael Rose, Cornelia Lawson

This study provides the first systematic, international, large-scale evidence on the extent and nature of multiple institutional affiliations on journal publications. Studying more than 15 million authors and 22 million articles from 40 countries, we document that in 2019 almost one in three articles was (co-)authored by authors with multiple affiliations, and that the share of authors with multiple affiliations increased from around 10% to 16% since 1996. The growth of multiple affiliations is prevalent in all fields and is stronger in high-impact journals. About 60% of multiple affiliations are between institutions from within the academic sector. International co-affiliations, which account for about a quarter of multiple affiliations, most often involve institutions from the United States, China, Germany and the United Kingdom, suggesting a core-periphery network. Network analysis also reveals a number of communities of countries that are more likely to share affiliations. We discuss potential causes and show that the timing of the rise in multiple affiliations can be linked to the introduction of more competitive funding structures, such as 'excellence initiatives', in a number of countries. We discuss implications for science and science policy.

Time-varying volatility in Bitcoin market and information flow at minute-level frequency
Irena Barjašić, Nino Antulov-Fantulin

In this paper, we analyze the time series of minute price returns on the Bitcoin market through statistical models of the generalized autoregressive conditional heteroskedasticity (GARCH) family. Several mathematical models have been proposed in finance to model the dynamics of price returns, each introducing a different perspective on the problem, but none without shortcomings. We combine an approach that uses historical values of returns and their volatilities (the GARCH family of models) with the so-called "Mixture of Distribution Hypothesis", which states that the dynamics of price returns are governed by the information flow about the market. Using time series of Bitcoin-related tweets and transaction volume as external information, we test for improvement in the volatility prediction of several GARCH model variants on a minute-level Bitcoin price time series. Statistical tests show that the simplest GARCH(1,1) responds best to the addition of an external signal for modelling the volatility process on out-of-sample data.
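The augmentation described above amounts to adding an exogenous regressor to the GARCH(1,1) conditional-variance recursion. A minimal sketch of that recursion (parameter values and series are made up, not fitted to Bitcoin data, and the paper's exact specification may differ):

```python
def garch11_x_variance(returns, x, omega, alpha, beta, gamma):
    """One-step-ahead conditional variance of a GARCH(1,1) model
    augmented with an exogenous information-flow regressor x
    (e.g. tweet counts or transaction volume):
        sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}
                   + gamma*x_{t-1}
    """
    # Initialize at the unconditional variance of plain GARCH(1,1).
    sigma2 = [omega / (1 - alpha - beta)]
    for t in range(1, len(returns)):
        sigma2.append(omega + alpha * returns[t - 1] ** 2
                      + beta * sigma2[t - 1] + gamma * x[t - 1])
    return sigma2

# Tiny illustrative series: returns and a tweets-per-minute signal.
r = [0.0, 0.01, -0.02, 0.015, 0.0]
vol_signal = [5, 40, 80, 10, 5]
s2 = garch11_x_variance(r, vol_signal, omega=1e-6, alpha=0.05,
                        beta=0.90, gamma=1e-8)
```

With gamma = 0 this reduces to plain GARCH(1,1); the hypothesis test in the paper asks whether a nonzero loading on the external signal improves out-of-sample volatility forecasts.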