# Research articles for 2020-07-05

arXiv

A new approach in stochastic optimization via the use of stochastic gradient Langevin dynamics (SGLD) algorithms, a variant of stochastic gradient descent (SGD) methods, allows us to efficiently approximate global minimizers of possibly complicated, high-dimensional landscapes. With this in mind, we extend here the non-asymptotic analysis of SGLD to the case of discontinuous stochastic gradients. We are thus able to provide theoretical guarantees for the algorithm's convergence in (standard) Wasserstein distances for both convex and non-convex objective functions. We also provide explicit upper estimates of the expected excess risk associated with the approximation of global minimizers of these objective functions. All these findings allow us to devise and present a fully data-driven approach for the optimal allocation of weights for the minimization of the CVaR of a portfolio of assets, with complete theoretical guarantees for its performance. Numerical results illustrate our main findings.
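Not from the paper itself — a minimal sketch of a single SGLD iteration on a toy objective with a discontinuous gradient, $f(x) = |x|$, to illustrate the kind of update the abstract analyses. The step size, inverse temperature, and toy objective are all illustrative assumptions:

```python
import numpy as np

def sgld_step(theta, grad, step=1e-2, beta=1e8, rng=None):
    """One SGLD update: a gradient step plus Gaussian exploration noise."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(size=theta.shape) * np.sqrt(2.0 * step / beta)
    return theta - step * grad(theta) + noise

# Toy example: minimize f(x) = |x|, whose gradient sign(x) jumps at 0.
rng = np.random.default_rng(0)
theta = np.array([5.0])
for _ in range(2000):
    theta = sgld_step(theta, lambda t: np.sign(t), rng=rng)
```

Despite the discontinuity at the minimizer, the iterate settles into a small neighbourhood of 0, with the noise scale controlled by `beta`.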

arXiv

Traditional non-life reserving models largely neglect the vast amount of information collected over the lifetime of a claim. This information includes covariates describing the policy (e.g. the value of the insured risk), the claim cause (e.g. hail) as well as the claim's detailed history (e.g. settlement, payment, lawyer involvement). We present the hierarchical reserving model as a modular framework for integrating a claim's history and claim-specific covariates into the development process. Hierarchical reserving models decompose the joint likelihood of the development process over time. Moreover, they are tailored to the portfolio at hand by adding a layer to the model for each of the registered events (e.g. settlement, payment). Layers are modelled with classical techniques (e.g. generalized linear models) or machine learning methods (e.g. gradient boosting machines), using claim-specific covariates. As a result of its flexibility, this framework incorporates many existing reserving models, ranging from aggregate models designed for runoff triangles to individual models using claim-specific covariates. This connection allows us to develop a data-driven strategy for choosing between aggregate and individual reserving; an important decision for reserving practitioners that is largely left unexplored in the scientific literature. We illustrate our method with a case study on a real-life insurance data set. This case study provides new insights into the covariates driving the development of claims and demonstrates the flexibility and robustness of the hierarchical reserving model over time.

arXiv

We propose a novel approach to sentiment data filtering for a portfolio of assets. In our framework, a dynamic factor model drives the evolution of the observed sentiment and allows us to identify two distinct components: a long-term component, modeled as a random walk, and a short-term component driven by a stationary VAR(1) process. Our model encompasses alternative approaches available in the literature and can be readily estimated by means of Kalman filtering and expectation maximization. This feature makes it convenient when the cross-sectional dimension of the portfolio increases. By applying the model to a portfolio of Dow Jones stocks, we find that the long-term component co-integrates with the market principal factor, while the short-term one captures transient swings of the market associated with the idiosyncratic components and captures the correlation structure of returns. Using quantile regressions, we assess the significance of the contemporaneous and lagged explanatory power of sentiment on returns, finding strong statistical evidence when extreme returns, especially negative ones, are considered. Finally, the lagged relation is exploited in a portfolio allocation exercise.
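As a rough illustration (not the paper's model), a scalar local-level Kalman filter shows how a random-walk long-term component can be extracted from noisy sentiment observations; the full framework adds a VAR(1) short-term block and EM parameter estimation, omitted here, and the noise variances below are assumptions:

```python
import numpy as np

def kalman_local_level(y, q=0.1, r=1.0):
    """Kalman filter for y_t = mu_t + eps_t with mu_t = mu_{t-1} + eta_t."""
    mu, P = y[0], 1.0
    out = []
    for obs in y:
        P = P + q                     # predict: random-walk state
        K = P / (P + r)               # Kalman gain
        mu = mu + K * (obs - mu)      # update with the new observation
        P = (1 - K) * P
        out.append(mu)
    return np.array(out)

rng = np.random.default_rng(1)
level = np.cumsum(rng.normal(scale=0.1, size=500))  # latent long-term component
y = level + rng.normal(scale=1.0, size=500)         # noisy observed sentiment
smooth = kalman_local_level(y)
```

The filtered series tracks the latent random walk far more closely than the raw observations do.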

arXiv

In this paper, using the structural approach, we derive a mathematical model of a discrete coupon bond with a provision that allows the holder to demand early redemption at any coupon date prior to maturity. Based on this model, we provide some analysis, including min-max and gradient estimates of the bond price. Using these estimates, we describe the existence and uniqueness of the default boundaries and some relationships between the design parameters of the discrete coupon bond with an early redemption provision. Then, under some assumptions, we prove the existence and uniqueness of the early redemption boundaries and provide an analytic formula for the bond price using higher binary options. Finally, we analyse the duration and credit spread of our bond, which are widely used in financial practice. Our work provides a design guide for the discrete coupon bond with an early redemption provision.

SSRN

The severity of the initial COVID-19 outbreaks varied substantially across the United States. We use this variation to study banks' response to the onset of the COVID-19 crisis. Commercial banks with higher exposure increased their loan supply relatively more in this period. They honored the loan commitment drawdowns by firms and they issued new term loans. To manage the liquidity risk, they increased their cash holdings. This led to an expansion in their size, which was financed mainly by insured deposits. Banks with larger pre-crisis cash holdings managed the liquidity risk without increasing their cash and, as a result, these banks increased their lending more.

arXiv

We estimate the effect of the Coronavirus (Covid-19) pandemic on racial animus, as measured by Google searches and Twitter posts including a commonly used anti-Asian racial slur. Our empirical strategy exploits the plausibly exogenous variation in the timing of the first Covid-19 diagnosis across regions in the United States. We find that the first local diagnosis leads to an immediate increase in racist Google searches and Twitter posts, with the latter mainly coming from existing Twitter users posting the slur for the first time. This increase could indicate a rise in future hate crimes, as we document a strong correlation between the use of the slur and anti-Asian hate crimes using historical data. Moreover, we find that the rise in animosity is directed at Asians rather than other minority groups and is stronger on days when the connection between the disease and Asians is more salient, as proxied by President Trump's tweets mentioning China and Covid-19 at the same time. In contrast, the negative economic impact of the pandemic plays little role in the initial increase in racial animus. Our results suggest that de-emphasizing the connection between the disease and a particular racial group can be effective in curbing current and future racial animus.

arXiv

The construction of replication strategies for contingent claims in the presence of risk and market friction is a key problem of financial engineering. In real markets, continuous replication, such as in the model of Black, Scholes and Merton, is not only unrealistic but also undesirable due to high transaction costs. Over the last decades, stochastic optimal-control methods have been developed to balance between effective replication and losses. More recently, with the rise of artificial intelligence, temporal-difference Reinforcement Learning, in particular variations of $Q$-learning in conjunction with Deep Neural Networks, has attracted significant interest. From a practical point of view, however, such methods are often relatively sample-inefficient, hard to train and lack performance guarantees. This motivates the investigation of a stable benchmark algorithm for hedging. In this article, the hedging problem is viewed as an instance of a risk-averse contextual $k$-armed bandit problem, for which a large body of theoretical results and well-studied algorithms are available. We find that the $k$-armed bandit model naturally fits the $P\&L$ formulation of hedging, providing a more accurate and sample-efficient approach than $Q$-learning and reducing to the Black-Scholes model in the absence of transaction costs and risks.
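To make the bandit framing concrete, here is a generic epsilon-greedy $k$-armed bandit choosing among discrete hedge ratios on a toy reward (squared residual P&L plus a small transaction cost). This is a plain mean-reward bandit, not the paper's risk-averse contextual algorithm, and the arm grid, true delta of 0.6, and cost are illustrative assumptions:

```python
import numpy as np

def eps_greedy_bandit(reward_fn, k=11, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy k-armed bandit tracking the running mean reward per arm."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(k)
    values = np.zeros(k)
    for _ in range(steps):
        arm = rng.integers(k) if rng.random() < eps else int(np.argmax(values))
        r = reward_fn(arm, rng)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # incremental mean
    return int(np.argmax(values))

def reward(arm, rng):
    h = arm / 10.0                     # arms index hedge ratios 0.0 .. 1.0
    dS = rng.normal()                  # underlying price move
    pnl = (h - 0.6) * dS               # residual exposure vs. a true delta of 0.6
    return -pnl ** 2 - 0.001 * h       # risk penalty plus a small hedging cost

best = eps_greedy_bandit(reward)
```

The bandit concentrates on the arm closest to the true delta, since that arm has the least residual P&L variance.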

SSRN

The objective of this study is to determine the impact of COVID-19 on the performance of the Pakistani stock market. This study uses the data on COVID-19 positive cases, fatalities and recoveries, and the closing prices of the PSX 100 index, for the first half of 2020. The findings of the study suggest that only COVID-19 recoveries influence the performance of the index, while daily positive cases and fatalities are insignificantly related to performance. Further studies can be performed by incorporating other variables such as economic growth, the interest rate and the inflation rate along with the COVID-19 related variables at a cross-country level.

SSRN

There have been stark differences in the ability of low-income and high-income individuals to protect themselves during the COVID-19 pandemic. We show that debt burdens contribute to this inequity by disproportionately increasing the cost to low-income individuals of reducing mobility during pandemics. Using a triple difference specification, we document that low-income individuals with high debt burdens are 6% more mobile than less constrained high-income individuals after the spread of coronavirus in the United States. Furthermore, this effect is exacerbated for African-American borrowers. Additionally, we provide suggestive evidence that this debt burden channel could have contributed to 2.16% more COVID-19 cases.

arXiv

This paper introduces a new functional optimization approach to portfolio optimization problems by treating the unknown weight vector as a function of past values, instead of as fixed unknown coefficients, as in the majority of studies. We first show that the optimal solution, in general, is not a constant function. We give the optimal conditions for a vector function to be the solution, and hence give the conditions for a plug-in solution (replacing the unknown mean and variance by certain estimates based on past values) to be optimal. After showing that the plug-in solutions are sub-optimal in general, we propose gradient-ascent algorithms to solve the functional optimization for mean-variance portfolio management, with convergence theorems provided. Simulations and empirical studies show that our approach can perform significantly better than the plug-in approach.
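For reference, the plug-in baseline the abstract argues against can be sketched as follows: estimate the mean and covariance from past returns and take weights in the $\Sigma^{-1}\mu$ direction. The normalization to unit gross exposure and the simulated data are assumptions for illustration:

```python
import numpy as np

def plugin_weights(returns):
    """Plug-in mean-variance weights: sample estimates replace mu and Sigma."""
    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    w = np.linalg.solve(sigma, mu)     # Sigma^{-1} mu direction
    return w / np.abs(w).sum()         # scale to unit gross exposure

rng = np.random.default_rng(2)
rets = rng.normal(loc=[0.001, 0.0005, -0.0002], scale=0.01, size=(1000, 3))
w = plugin_weights(rets)
```

The paper's point is that treating `w` instead as a *function* of past data, and optimizing over that function space, can strictly improve on this constant-coefficient rule.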

arXiv

We propose to derive deviation measures through the Minkowski gauge of a given set of acceptable positions. We show that, given a suitable acceptance set, any positive homogeneous deviation measure can be accommodated in our framework. In doing so, we provide a new interpretation for such measures, namely, that they quantify how much one must shrink a position for it to become acceptable. In particular, the Minkowski gauge of a set which is convex, stable under scalar addition, and radially bounded at non-constants is a generalized deviation measure. Furthermore, we explore the relations between mathematical and financial properties attributable to an acceptance set on the one hand, and the corresponding properties of the induced measure on the other. In addition, we show that any positive homogeneous monetary risk measure can be represented through Minkowski gauges. Dual characterizations in terms of polar sets and support functionals are provided.
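For readers unfamiliar with the construction, the standard definition of the Minkowski gauge of an acceptance set $\mathcal{A}$ (stated here from general convex-analysis usage, not quoted from the paper) is:

```latex
% Minkowski gauge of an acceptance set A:
\[
  \rho_{\mathcal{A}}(X) \;=\; \inf\bigl\{\lambda > 0 \,:\, X/\lambda \in \mathcal{A}\bigr\}.
\]
% Interpretation: if rho_A(X) = 2, the position X must be shrunk by a
% factor of 2 before it becomes acceptable, matching the abstract's reading.
```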

arXiv

We investigate the use of quantum computers for building a portfolio out of a universe of U.S. listed, liquid equities that contains an optimal set of stocks. Starting from historical market data, we look at various problem formulations on the D-Wave Systems Inc. D-Wave 2000Q(TM) System (hereafter called DWave) to find the optimal risk vs. return portfolio: an optimized portfolio based on the Markowitz formulation and the Sharpe ratio, a simplified Chicago Quantum Ratio (CQR), and a new Chicago Quantum Net Score (CQNS). We approach this first classically, then by our new method on DWave. Our results show that practitioners can use a DWave to select attractive portfolios out of 40 U.S. liquid equities.
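Annealers such as the DWave minimize quadratic functions of binary variables (QUBOs). A common way to cast Markowitz-style stock selection in that form, sketched here classically with brute force on a tiny two-asset example (the data and risk-aversion value are illustrative, not the paper's CQR/CQNS objectives):

```python
import numpy as np
from itertools import product

def portfolio_qubo(cov, mu, risk_aversion=1.0):
    """QUBO matrix: risk (covariance) in Q, expected return subtracted on the diagonal."""
    Q = risk_aversion * cov.copy()
    Q[np.diag_indices_from(Q)] -= mu
    return Q

def brute_force(Q):
    """Exhaustive search over binary selections (feasible only for small n)."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x                  # QUBO energy of this selection
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

cov = np.array([[0.04, 0.01], [0.01, 0.09]])   # toy covariance
mu = np.array([0.10, 0.02])                    # toy expected returns
x, e = brute_force(portfolio_qubo(cov, mu))
```

Here the minimizer selects only the first asset, whose return outweighs its risk; on real hardware the same `Q` would be handed to the annealer instead of enumerated.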

arXiv

Applications of the quantum algorithm for Monte Carlo simulation to the pricing of financial derivatives have been discussed in previous papers. However, up to now, the pricing model discussed in such papers is the Black-Scholes model, which is important but simple. Therefore, it is worthwhile to consider how to implement the more complex models used in practice in financial institutions. In this paper, we therefore consider the local volatility (LV) model, in which the volatility of the underlying asset price depends on the price and time. We present two types of implementation. One is the register-per-RN way, which is adopted in most previous papers. In this way, each of the random numbers (RNs) required to generate a path of the asset price is generated on a separate register, so the required qubit number increases in proportion to the number of RNs. The other is the PRN-on-a-register way, which was proposed in the author's previous work. In this way, a sequence of pseudo-random numbers (PRNs) generated on a register is used to generate paths of the asset price, so the required qubit number is reduced, with a trade-off against circuit depth. We present circuit diagrams for these two implementations in detail and estimate the required resources: qubit number and T-count.
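For orientation, the classical computation that either quantum circuit must encode is plain Euler-Maruyama path generation under local volatility, where $\sigma$ depends on the current price and time. A minimal sketch with a toy leverage-style $\sigma(S,t)$ of our own choosing (the paper's circuits, not this function, are its contribution):

```python
import numpy as np

def lv_paths(s0, local_vol, T=1.0, n_steps=50, n_paths=10000, seed=3):
    """Log-Euler paths of dS = sigma(S, t) S dW under a local-volatility function."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, s0, dtype=float)
    for k in range(n_steps):
        t = k * dt
        sig = local_vol(S, t)          # volatility re-evaluated each step
        z = rng.normal(size=n_paths)
        S *= np.exp(-0.5 * sig ** 2 * dt + sig * np.sqrt(dt) * z)
    return S

# Toy sigma(S, t): volatility rises as the asset falls (leverage effect).
sigma = lambda S, t: 0.2 + 0.1 * np.maximum(0.0, 1.0 - S / 100.0)
ST = lv_paths(100.0, sigma)
price = np.mean(np.maximum(ST - 100.0, 0.0))  # ATM call payoff, zero rates
```

Each path here consumes `n_steps` random numbers, which is exactly the resource the register-per-RN and PRN-on-a-register approaches trade off differently in qubit count versus circuit depth.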

arXiv

It has been decades since the academic world of ruin theory defined the insolvency of an insurance company as the time when its surplus falls below zero. This simplification, however, needs careful adaptation to imitate the real-world liquidation process. Inspired by Broadie et al. (2007) and Li et al. (2020), this paper uses a three-barrier model to describe the financial stress towards bankruptcy of an insurance company. The financial status of the insurer is divided into three states: solvent, insolvent and liquidated, where the insurer's surplus process in the solvent and insolvent states is modelled by two spectrally negative L\'{e}vy processes, which have been taken as good candidates to model insurance risks. We provide a rigorous definition of the time of liquidation ruin in this three-barrier model. By adopting the techniques of excursions in the fluctuation theory, we study the joint distribution of the time of liquidation, the surplus at liquidation and the historical high of the surplus until liquidation, which generalizes the known results on the classical expected discounted penalty function in Gerber and Shiu (1998). The results have semi-explicit expressions in terms of the scale functions and the L\'{e}vy triplets associated with the two underlying L\'{e}vy processes. The special case when the two underlying L\'{e}vy processes coincide with each other is also studied, where our results are expressed compactly via only the scale functions. The corresponding results show good consistency with the existing literature on Parisian ruin with (or without) a lower barrier in Landriault et al. (2014), Baurdoux et al. (2016) and Frostig and Keren-Pinhasik (2019). Besides, numerical examples are provided to illustrate the underlying features of liquidation ruin.

SSRN

This paper compares the performance of safe haven assets during two stressful stock market regimes – the 2008 Global Financial Crisis (GFC) and the COVID-19 pandemic. Our analysis across the ten largest economies in the world shows that the traditional choice, gold, acts as a safe haven during the GFC but fails to protect investor wealth during COVID. Our results suggest that investors might have lost trust in gold. Furthermore, silver does not serve as a safe haven during either crisis, while US Treasuries and the Swiss Franc generally act as strong safe havens during both crises. The US dollar acts as a safe haven during the GFC for all the countries except for the United States, but only for China and India during COVID. Finally, Bitcoin does not serve as a safe haven for all countries during COVID; however, the largest stablecoin, Tether, serves as a strong safe haven. Thus, our results suggest that, during a pandemic, investors should prefer liquid and stable assets rather than gold.