Research articles for 2021-01-18

A Framework of Multivariate Utility Optimization with General Benchmarks
Zongxia Liang, Yang Liu, Litian Zhang

Benchmarks in the utility function have various interpretations, including performance guarantees and risk constraints in fund contracts and reference levels in cumulative prospect theory. In most of the literature, benchmarks are a deterministic constant or a fraction of the underlying wealth; as such, the utility is still a univariate function of the wealth. In this paper, we propose a framework of multivariate utility optimization with general benchmark variables, which include stochastic reference levels as typical examples. The utility is state-dependent and the objective is no longer distribution-invariant. We provide the optimal solution(s) and fully investigate the issues of well-posedness, feasibility, finiteness and attainability. The discussion does not require many classic conditions and assumptions, e.g., that the Lagrange multiplier always exists. Moreover, several surprising phenomena and technical difficulties may appear: (i) non-uniqueness of the optimal solutions, (ii) various reasons for non-existence of the Lagrange multiplier and corresponding results on the optimal solution, (iii) measurability issues of the concavification of a multivariate utility and the selection of the optimal solutions, and (iv) existence of an optimal solution not decreasing with respect to the pricing kernel. These issues are thoroughly addressed, rigorously proved, completely summarized and insightfully visualized. As an application, the framework is adopted to model and solve a constrained utility optimization problem with state-dependent performance and risk benchmarks.

A Two-Population Mortality Model to Assess Longevity Basis Risk
Selin Özen, Şule Şahin

Index-based hedging solutions are used to transfer the longevity risk to the capital markets. However, mismatches between the liability of the hedger and the hedging instrument cause longevity basis risk. Therefore, an appropriate two-population model to measure and assess the longevity basis risk is required. In this paper, we aim to construct a two-population mortality model to provide an effective hedge against the longevity basis risk. The reference population is modelled by using the Lee-Carter model with the renewal process and exponential jumps proposed by Özen and Şahin (2020) and the dynamics of the book population are specified. The analysis based on the UK mortality data indicates that the proposed model for the reference population and the common age effect model for the book population provide a better fit compared to the other models considered in the paper. Different two-population models are used to investigate the impact of the sampling risk on the index-based hedge as well as to analyse the risk reduction regarding hedge effectiveness. The results show that the proposed model provides a significant risk reduction when mortality jumps and the sampling risk are taken into account.
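
The reference-population model builds on the classic Lee-Carter structure log m(x,t) = a_x + b_x k_t + eps. As a baseline sketch (the renewal-process and jump extensions of Özen and Şahin (2020) are not reproduced here), the standard SVD fit can be written as follows; all function and variable names are ours:

```python
import numpy as np

def fit_lee_carter(log_mx):
    """SVD fit of the basic Lee-Carter model log m(x,t) = a_x + b_x*k_t.

    log_mx: (ages, years) array of log central death rates.
    Returns a_x, b_x (summing to 1) and k_t (summing to 0).
    """
    a_x = log_mx.mean(axis=1)                        # average age profile
    U, s, Vt = np.linalg.svd(log_mx - a_x[:, None], full_matrices=False)
    b_x, k_t = U[:, 0], s[0] * Vt[0]                 # leading rank-1 factor
    scale = b_x.sum()                                # identifiability: sum b_x = 1
    b_x, k_t = b_x / scale, k_t * scale
    kbar = k_t.mean()                                # identifiability: sum k_t = 0
    return a_x + b_x * kbar, b_x, k_t - kbar
```

A two-population extension would fit a common age effect b_x shared by reference and book populations, which is the comparison the paper carries out.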

AdaVol: An Adaptive Recursive Volatility Prediction Method
Nicklas Werge, Olivier Wintenberger

Quasi-Maximum Likelihood (QML) procedures are theoretically appealing and widely used for statistical inference. While there are extensive references on QML estimation in batch settings, it attracted little attention in streaming settings until recently. An investigation of the convergence properties of the QML procedure in a general conditionally heteroscedastic time series model is conducted, and the classical batch optimization routines are extended to the framework of streaming and large-scale problems. An adaptive recursive estimation routine for GARCH models, named AdaVol, is presented. The AdaVol procedure relies on stochastic approximations combined with the technique of Variance Targeting Estimation (VTE). This recursive method has computationally efficient properties, while VTE alleviates some convergence difficulties encountered by the usual QML estimation due to a lack of convexity. Empirical results demonstrate a favorable trade-off between AdaVol's stability and its ability to adapt to time-varying estimates on real-life data.
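
The recursive idea can be illustrated in a few lines: pin omega by variance targeting and update (alpha, beta) by stochastic-gradient steps on the per-observation QML loss. This is an illustrative simplification with truncated one-step gradients, not the authors' AdaVol procedure; all names and constants are ours:

```python
import numpy as np

def recursive_garch_vte(y, lr=0.01, alpha0=0.05, beta0=0.90):
    """Streaming GARCH(1,1) sketch with Variance Targeting Estimation.

    omega is pinned to vbar*(1 - alpha - beta), with vbar a running estimate
    of the unconditional variance; (alpha, beta) follow projected
    stochastic-gradient steps on the per-observation QML loss
        l_t = log(sig2_t) + y_t**2 / sig2_t,
    using truncated one-step gradients of sig2_t.
    """
    alpha, beta = alpha0, beta0
    vbar = float(np.var(y[:20])) if len(y) >= 20 else float(np.var(y))
    sig2, prev_sig2 = vbar, vbar
    sig2_path = np.empty(len(y))
    for t, yt in enumerate(y):
        sig2_path[t] = sig2
        dl = 1.0 / sig2 - yt**2 / sig2**2            # d l_t / d sig2_t
        y2prev = y[t - 1]**2 if t > 0 else vbar
        step = lr / np.sqrt(t + 1.0)                 # Robbins-Monro step size
        alpha = np.clip(alpha - step * dl * (y2prev - vbar), 1e-4, 0.3)
        beta = np.clip(beta - step * dl * (prev_sig2 - vbar), 0.5, 0.99)
        if alpha + beta > 0.995:                     # keep stationarity
            beta = 0.995 - alpha
        vbar += (yt**2 - vbar) / (t + 21.0)          # recursive variance target
        omega = vbar * (1.0 - alpha - beta)
        prev_sig2 = sig2
        sig2 = omega + alpha * yt**2 + beta * sig2   # next conditional variance
    return sig2_path, alpha, beta
```

Variance targeting removes omega from the optimization, which is one of the convexity-related difficulties the VTE step is meant to alleviate.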

An approximate solution for the power utility optimization under predictable returns
Dmytro Ivasiuk

This work presents an approximate solution of the portfolio choice problem for an investor with a power utility function and predictable returns. Assuming that asset returns follow a vector autoregressive process with normally distributed error terms (a popular choice in the financial literature for modelling the return path), portfolio gross returns turn out to be normally distributed as a linear combination of normal variables. As shown, the log-normal distribution is a good proxy for the normal distribution when the standard deviation of the latter is much smaller than its mean, and this fact is exploited to derive the optimal weights. In addition, the paper provides a simulation study comparing the derived result to the well-known numerical solution obtained by using a Taylor series expansion of the value function.
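
The log-normal-as-proxy step can be checked numerically: moment-match a log-normal to a normal whose standard deviation is small relative to its mean, and compare the two densities. A minimal sketch (parameter values and names are ours, chosen to mimic a gross return):

```python
import numpy as np

def matched_lognormal(m, v):
    """Parameters (mu, s) of a log-normal with mean m > 0 and variance v."""
    s2 = np.log1p(v / m**2)
    return np.log(m) - 0.5 * s2, np.sqrt(s2)

def normal_pdf(x, m, sd):
    return np.exp(-0.5 * ((x - m) / sd)**2) / (sd * np.sqrt(2.0 * np.pi))

def lognormal_pdf(x, mu, s):
    return np.exp(-0.5 * ((np.log(x) - mu) / s)**2) / (x * s * np.sqrt(2.0 * np.pi))

# A gross return with mean 1.01 and standard deviation 0.02 (sd << mean).
m, sd = 1.01, 0.02
mu, s = matched_lognormal(m, sd**2)
x = np.linspace(m - 4 * sd, m + 4 * sd, 2001)
gap = np.max(np.abs(normal_pdf(x, m, sd) - lognormal_pdf(x, mu, s)))
rel_gap = gap / normal_pdf(m, m, sd)   # worst-case gap relative to the peak
```

With sd/m around 2%, the densities differ by only on the order of a percent of the peak height; the approximation degrades as sd grows relative to m.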

Beating the Market with Generalized Generating Portfolios
Patrick Mijatovic

Stochastic portfolio theory aims at finding relative arbitrages, i.e. trading strategies which outperform the market with probability one. Functionally generated portfolios, which are deterministic functions of the market weights, are an invaluable tool in doing so. Driven by a practitioner point of view, where investment decisions are based upon consideration of various financial variables, we generalize functionally generated portfolios and allow them to depend on continuous-path semimartingales, in addition to the market weights. By means of examples we demonstrate how the inclusion of additional processes can reduce time horizons beyond which relative arbitrage is possible, boost performance of generated portfolios, and how investor preferences and specific investment views can be included in the context of stochastic portfolio theory. Also striking is the construction of a relative arbitrage opportunity generated by the volatility of the additional semimartingale. An in-depth empirical analysis of the performance of the proposed strategies confirms our theoretical findings and demonstrates that our portfolios represent profitable investment opportunities even in the presence of transaction costs.
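
For reference, the classical functionally generated portfolio of stochastic portfolio theory is the entropy-weighted portfolio, a deterministic function of the market weights alone; the paper's generalization additionally conditions on auxiliary semimartingales. A minimal sketch of the textbook special case (names and the cap figures are ours):

```python
import numpy as np

def entropy_portfolio(market_weights):
    """Entropy-generated portfolio: pi_i proportional to mu_i * log(1/mu_i),
    where mu are the market capitalization weights. This is the classical
    special case; the paper allows the generating function to depend on
    additional continuous-path semimartingales as well."""
    mu = np.asarray(market_weights, dtype=float)
    w = -mu * np.log(mu)            # mu_i * log(1/mu_i)
    return w / w.sum()

caps = np.array([500.0, 300.0, 150.0, 50.0])   # hypothetical market caps
mu = caps / caps.sum()
pi = entropy_portfolio(mu)
```

The entropy portfolio systematically overweights small-capitalization names relative to the market, which is the source of its relative-arbitrage property under market diversity and sufficient volatility.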

Computation of systemic risk measures: a mixed-integer linear programming approach
Çağın Ararat, Nurtai Meimanjanov

Systemic risk is concerned with the instability of a financial system whose members are interdependent in the sense that the failure of a few institutions may trigger a chain of defaults throughout the system. Recently, several systemic risk measures have been proposed in the literature to determine capital requirements for the members subject to joint risk considerations. We address the problem of computing systemic risk measures for systems with sophisticated clearing mechanisms. In particular, we consider the Eisenberg-Noe network model and the Rogers-Veraart network model, where the former is extended to the case where operating cash flows in the system are unrestricted in sign. We propose novel mixed-integer linear programming problems that can be used to compute clearing vectors for these models. Due to the binary variables in these problems, the corresponding (set-valued) systemic risk measures fail to have convex values in general. We associate nonconvex vector optimization problems to these systemic risk measures and provide theoretical results related to the weighted-sum and minimum step-length scalarizations of these problems under the extended Eisenberg-Noe and Rogers-Veraart models. We test the proposed formulations on computational examples and perform sensitivity analyses with respect to some model-specific and structural parameters.
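
For orientation, the greatest clearing vector of the baseline Eisenberg-Noe model (nonnegative cash flows, proportional payments) can be computed by simple Picard iteration; the paper's MILP formulations target the extended models (sign-unrestricted cash flows, Rogers-Veraart bankruptcy costs) where such a simple scheme no longer suffices. Names are ours:

```python
import numpy as np

def eisenberg_noe_clearing(L, e, tol=1e-10, max_iter=10_000):
    """Greatest clearing vector of the classical Eisenberg-Noe model.

    L[i, j] = nominal liability of bank i to bank j; e = external assets >= 0.
    Iterates p -> min(pbar, e + Pi.T @ p) from the total-obligation vector,
    where Pi is the matrix of relative liabilities.
    """
    pbar = L.sum(axis=1)                           # total obligations
    with np.errstate(divide="ignore", invalid="ignore"):
        Pi = np.where(pbar[:, None] > 0, L / pbar[:, None], 0.0)
    p = pbar.copy()
    for _ in range(max_iter):
        p_new = np.minimum(pbar, e + Pi.T @ p)     # pay in full, capped by assets
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p
```

In a two-bank example where bank 0 owes 10 to bank 1 but holds only 2 in external assets plus bank 1's payment of 5, the clearing vector is (7, 5): bank 0 defaults partially and bank 1 pays in full.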

Deep Reinforcement Learning for Active High Frequency Trading
Antonio Briola, Jeremy Turiel, Riccardo Marcaccioli, Tomaso Aste

We introduce the first end-to-end Deep Reinforcement Learning based framework for active high frequency trading. We train DRL agents to trade one unit of Intel Corporation stock by employing the Proximal Policy Optimization algorithm. The training is performed on three contiguous months of high frequency Limit Order Book data. In order to maximise the signal to noise ratio in the training data, we compose the latter by selecting only the training samples with the largest price changes. The test is then carried out on the following month of data. Hyperparameters are tuned using the Sequential Model Based Optimization technique. We consider three different state characterizations, which differ in the LOB-based meta-features they include. The agents learn trading strategies able to produce stable positive returns in spite of the highly stochastic and non-stationary environment, which is remarkable in itself. Analysing the agents' performances on the test data, we argue that the agents are able to create a dynamic representation of the underlying environment, highlighting the occasional regularities present in the data and exploiting them to create long-term profitable trading strategies.

Diagnosis of systemic risk and contagion across financial sectors
Sayuj Choudhari, Richard Licheng Zhu

In normal times, it is assumed that financial institutions operating in non-overlapping sectors have complementary and distinct outcomes, typically reflected in mostly uncorrelated outcomes and asset returns. Such is the reasoning behind common "free lunches" to be had in investing, like diversifying assets across equity and bond sectors. Unfortunately, the recurrence of crises like the Great Financial Crisis of 2007-2008 demonstrates that such convenient assumptions often break down, with dramatic consequences for all financial actors. In hindsight, the emergence of systemic risk (as exemplified by failure in one part of a system spreading to ostensibly unrelated parts of the system) has been explained by narratives such as deregulation and leverage. But can we diagnose and quantify the ongoing emergence of systemic risk in financial systems? In this study, we focus on two previously documented measures of systemic risk that require only easily available time series data (e.g., monthly asset returns): cross-correlation and principal component analysis. We apply these measures to daily and monthly returns on hedge fund indexes and broad-based market indexes, and discuss their results. We hope that a frank discussion of these simple, non-parametric measures can help inform legislators, lawmakers, and financial actors of potential crises looming on the horizon.
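
Both measures can be computed in a few lines from a (time x assets) returns panel; the share of variance carried by the first principal component is sometimes called an absorption-ratio-style statistic. A sketch under that reading (function names are ours):

```python
import numpy as np

def systemic_risk_indicators(returns):
    """Two simple non-parametric systemic-risk gauges from a (T, N) panel
    of asset returns: the mean pairwise correlation, and the share of total
    variance absorbed by the first principal component of the correlation
    matrix. Higher values of either suggest more tightly coupled markets."""
    C = np.corrcoef(returns, rowvar=False)
    N = C.shape[0]
    mean_corr = C[~np.eye(N, dtype=bool)].mean()   # off-diagonal average
    eigvals = np.linalg.eigvalsh(C)                # ascending; they sum to N
    pc1_share = eigvals[-1] / eigvals.sum()
    return mean_corr, pc1_share
```

On a panel driven by a single common factor, both indicators rise sharply relative to an independent panel, which is the diagnostic signal the study tracks over time.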

Dynamic industry uncertainty networks and the business cycle
Jozef Barunik, Mattia Bevilacqua, Robert Faff

This paper introduces new forward-looking uncertainty network measures built from the main US industries. We argue that this network structure extracted from options investors' expectations is meaningfully dynamic and contains valuable information relevant for business cycles. Classifying industries according to their contribution to system-related uncertainty across business cycles, we uncover an uncertainty hub role for the communications, industrials and information technology sectors, while shocks to materials, real estate and utilities do not propagate strongly across the network. We find that a dynamic ex-ante network of uncertainty is a useful predictor of business cycles, especially when it is based on uncertainty hubs. The uncertainty network is found to behave counter-cyclically, since a tighter network of industry uncertainty tends to be associated with future business cycle contractions.

Equilibria and Systemic Risk in Saturated Networks
Leonardo Massai, Giacomo Como, Fabio Fagnani

We undertake a fundamental study of network equilibria modeled as solutions of fixed point equations for monotone linear functions with saturation nonlinearities. The considered model extends one originally proposed to study systemic risk in networks of financial institutions interconnected by mutual obligations and is one of the simplest continuous models accounting for shock propagation phenomena and cascading failure effects. It also characterizes Nash equilibria of constrained quadratic network games with strategic complementarities. We first derive explicit expressions for network equilibria and prove necessary and sufficient conditions for their uniqueness encompassing and generalizing results available in the literature. Then, we study jump discontinuities of the network equilibria when the exogenous flows cross certain regions of measure 0 representable as graphs of continuous functions. Finally, we discuss some implications of our results in the two main motivating applications. In financial networks, this bifurcation phenomenon is responsible for how small shocks in the assets of a few nodes can trigger major aggregate losses to the system and cause the default of several agents. In constrained quadratic network games, it induces a blow-up behavior of the sensitivity of Nash equilibria with respect to the individual benefits.
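
The equilibrium notion can be prototyped directly: with sat the entrywise projection onto [0, u], an equilibrium solves x = sat(c + Wx). A Picard-iteration sketch for the contractive regime follows (names are ours; the paper's focus is precisely the non-contractive cases where uniqueness and continuity can fail):

```python
import numpy as np

def saturated_equilibrium(W, c, u, tol=1e-12, max_iter=100_000):
    """Fixed point of x = sat(c + W @ x), with sat clipping entrywise
    to [0, u]. When the linear part is a contraction (e.g. a matrix norm
    of W below 1), Picard iteration converges to the unique equilibrium;
    outside that regime multiple equilibria may exist."""
    x = np.zeros_like(c)
    for _ in range(max_iter):
        x_new = np.clip(c + W @ x, 0.0, u)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x
```

Interpreted financially, c plays the role of exogenous flows and the saturation caps payments at the obligation levels u; the jump discontinuities studied in the paper arise as c crosses the critical regions.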

Exponential Kernels with Latency in Hawkes Processes: Applications in Finance
Marcos Costa Santos Carreira

The Tick library allows researchers in market microstructure to simulate and fit Hawkes processes on high-frequency data, with optimized parametric and non-parametric learners. One challenge, however, is to take into account the correct causality of order book events under latency: the only way one order book event can influence another is if the time difference between them (by the central order book timestamps) is greater than the minimum amount of time needed for an event to (i) be published in the order book, (ii) reach the trader responsible for the second event, (iii) influence the decision (processing time at the trader), and (iv) for the second event to reach the order book and be processed. For this we can use exponential kernels shifted to the right by the latency amount. We derive the expression for the log-likelihood to be maximized in the 1-D and multidimensional cases, and test this method on simulated and real data. On real data we find that, although not all decays are the same, the latency itself determines most of the decays. We also show how the decays are related to the latency. Code is available on GitHub at
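
Our reading of the shifted kernel is phi(s) = alpha*beta*exp(-beta*(s - delta)) for s > delta and 0 otherwise, where delta is the latency. Under that assumption the 1-D log-likelihood can be evaluated directly; this O(n^2) sketch is ours, not the Tick implementation:

```python
import numpy as np

def hawkes_loglik_shifted(times, T, mu, alpha, beta, delta):
    """Log-likelihood of a 1-D Hawkes process on [0, T] with baseline mu and
    a latency-shifted exponential kernel
        phi(s) = alpha * beta * exp(-beta * (s - delta)),  s > delta,
    and phi(s) = 0 for s <= delta, so only events older than delta excite."""
    times = np.asarray(times, dtype=float)
    ll = 0.0
    for i, t in enumerate(times):
        lags = t - times[:i] - delta
        excite = alpha * beta * np.exp(-beta * lags[lags > 0]).sum()
        lam = mu + excite                       # intensity at the event time
        if lam <= 0.0:
            return -np.inf
        ll += np.log(lam)
    # compensator: integral of the intensity over [0, T]
    tails = T - times - delta
    comp = mu * T + alpha * np.sum(1.0 - np.exp(-beta * tails[tails > 0]))
    return ll - comp
```

Setting alpha = 0, or pushing delta beyond the horizon, recovers the homogeneous Poisson log-likelihood n*log(mu) - mu*T, which is a useful sanity check on the compensator term.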

Intra-Horizon Expected Shortfall and Risk Structure in Models with Jumps
Walter Farkas, Ludovic Mathys, Nikola Vasiljević

The present article deals with intra-horizon risk in models with jumps. Our general understanding of intra-horizon risk is along the lines of the approach taken in Boudoukh, Richardson, Stanton and Whitelaw (2004), Rossello (2008), Bhattacharyya, Misra and Kodase (2009), Bakshi and Panayotov (2010), and Leippold and Vasiljević (2019). In particular, we believe that quantifying market risk by strictly relying on point-in-time measures cannot be deemed a satisfactory approach in general. Instead, we argue that complementing this approach by studying measures of risk that capture the magnitude of losses potentially incurred at any time of a trading horizon is necessary when dealing with (m)any financial position(s). To address this issue, we propose an intra-horizon analogue of the expected shortfall for general profit and loss processes and discuss its key properties. Our intra-horizon expected shortfall is well-defined for (m)any popular class(es) of Lévy processes encountered when modeling market dynamics and constitutes a coherent measure of risk, as introduced in Cheridito, Delbaen and Kupper (2004). On the computational side, we provide a simple method to derive the intra-horizon risk inherent to popular Lévy dynamics. Our general technique relies on results for maturity-randomized first-passage probabilities and allows for a derivation of diffusion and single jump risk contributions. These theoretical results are complemented with an empirical analysis, where popular Lévy dynamics are calibrated to S&P 500 index data and an analysis of the resulting intra-horizon risk is presented.
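
The distinction between point-in-time and intra-horizon risk is easy to see by brute force: simulate a Merton-type jump-diffusion P&L and compare the expected shortfall of the terminal value with that of the running minimum over the horizon. This Monte Carlo sketch is only illustrative; the paper computes the intra-horizon expected shortfall analytically via maturity-randomized first-passage probabilities. All names and parameter values are ours:

```python
import numpy as np

def mc_expected_shortfalls(mu, sigma, lam, jmean, jsd, horizon, level,
                           n_paths=20000, n_steps=250, seed=0):
    """Point-in-time vs intra-horizon expected shortfall by Monte Carlo for
    the P&L X_t = mu*t + sigma*W_t + compound Poisson jumps (rate lam,
    normal jump sizes). ES at confidence `level` averages the worst
    (1 - level) fraction of outcomes; the intra-horizon version applies
    this to the running minimum of X over [0, horizon]."""
    rng = np.random.default_rng(seed)
    dt = horizon / n_steps
    incr = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    nj = rng.poisson(lam * dt, (n_paths, n_steps))
    incr += nj * jmean + np.sqrt(nj) * jsd * rng.standard_normal((n_paths, n_steps))
    paths = np.cumsum(incr, axis=1)
    terminal = paths[:, -1]
    run_min = np.minimum(paths.min(axis=1), 0.0)   # X_0 = 0 is on every path
    k = max(1, int(np.ceil((1.0 - level) * n_paths)))
    es_pit = -np.mean(np.sort(terminal)[:k])       # ES of the terminal P&L
    es_ih = -np.mean(np.sort(run_min)[:k])         # intra-horizon ES
    return es_pit, es_ih
```

Since the running minimum of each path is never above its terminal value, the intra-horizon expected shortfall always dominates the point-in-time one, which is the quantitative gap the paper studies for calibrated Lévy models.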

Quantification of Risk in Classical Models of Finance
Alois Pichler, Ruben Schlotter

This paper enhances the pricing of derivatives as well as optimal control problems to a level comprising risk. We employ nested risk measures to quantify risk, investigate the limiting behavior of nested risk measures within the classical models in finance and characterize existence of the risk-averse limit. As a result we demonstrate that the nested limit is unique, irrespective of the initially chosen risk measure. Within the classical models risk aversion gives rise to a stream of risk premiums, comparable to dividend payments. In this context we connect coherent risk measures with the Sharpe ratio from modern portfolio theory and extract the Z-spread -- a widely accepted quantity in economics to hedge risk. The results for European option pricing are then extended to risk-averse American options, where we study the impact of risk on the price as well as the optimal time to exercise the option. We also extend Merton's optimal consumption problem to the risk-averse setting.

Temporal Clustering of Disorder Events During the COVID-19 Pandemic
Gian Maria Campedelli, Maria Rita D'Orsogna

The COVID-19 pandemic has unleashed multiple public health, socio-economic, and institutional crises. Measures taken to slow the spread of the virus have fostered significant strain between authorities and citizens, leading to waves of social unrest and anti-government demonstrations. We study the temporal nature of pandemic-related disorder events as tallied by the "COVID-19 Disorder Tracker" initiative by focusing on the three countries with the largest number of incidents, India, Israel, and Mexico. By fitting Poisson and Hawkes processes to the stream of data, we find that disorder events are inter-dependent and self-excite in all three countries. Geographic clustering confirms these features at the subnational level, indicating that nationwide disorders emerge as the convergence of meso-scale patterns of self-excitation. Considerable diversity is observed among countries when computing correlations of events between subnational clusters; these are discussed in the context of specific political, societal and geographic characteristics. Israel, the most territorially compact and where large scale protests were coordinated in response to government lockdowns, displays the largest reactivity and the shortest period of influence following an event, as well as the strongest nationwide synchrony. In Mexico, where complete lockdown orders were never mandated, reactivity and nationwide synchrony are lowest. Our work highlights the need for authorities to promote local information campaigns to ensure that livelihoods and virus containment policies are not perceived as mutually exclusive.

The Impact of Digital Marketing on Sausage Manufacturing Companies in the Altos of Jalisco
Guillermo Jose Navarro del Toro

One of the goals of any business, in addition to producing high-quality, community-accepted products, is to significantly increase sales. Unfortunately, there are regions where new marketing technologies, which make it possible to reach a larger number of potential consumers not only at the regional level but also at the state and national level, are not yet used. This research, which included qualitative and quantitative methods as well as interviews with owners, employees and clients of three sausage companies, seeks to measure the impact of digital marketing in the Altos of Jalisco, Mexico. Thus, in addition to inquiring about the degree of knowledge they have regarding information and communication technologies (ICT) to expand their markets to areas with higher population density, another goal is to learn opinions about their manufactured products, their quality and their acceptance. It should not be forgotten that companies are moving to an increasingly connected world, which enables entrepreneurs to get their products to a greater number of consumers through the Internet and smart devices, such as cell phones, tablets and computers, and thus ensure the survival of the company and a longer stay in the market.

Towards a more sustainable academic publishing system
Mohsen Kayal, Jane Ballard, Ehsan Kayal

Communicating new scientific discoveries is key to human progress. Yet, this endeavor is hindered by monetary restrictions for publishing one's findings and accessing other scientists' reports. This process is further exacerbated by a large portion of publishing media owned by private, for-profit companies that do not reinject academic publishing benefits into the scientific community, in contrast with journals from scientific societies. As the academic world is not exempt from economic crises, new alternatives are necessary to support a fair publishing system for society. After summarizing the general issues of academic publishing today, we present several solutions at the levels of the individual scientist, the scientific community, and the publisher towards more sustainable scientific publishing. By providing a voice to the many scientists who are fundamental protagonists, yet often powerless witnesses, of the academic publishing system, and a roadmap for implementing solutions, this initiative can spark increased awareness and promote shifts towards impactful practices.

Visual Analytics approach for finding spatiotemporal patterns from COVID19
Arunav Das

The Bounce Back Loan scheme is among a number of UK business financial support schemes launched by the UK Government in 2020 amidst the pandemic lockdown. Through these schemes, struggling businesses were provided financial support to weather the economic slowdown from the pandemic lockdown. As of 17 December 2020, £43.5bn in loans had been provided. However, with no major checks for granting these loans and the looming prospect of loan losses from write-offs from failed businesses and fraud, this paper explores the prospect of applying spatiotemporal modelling techniques to investigate whether geospatial patterns and temporal analysis could aid the design of loan grant criteria for such schemes. Applying a clustering and visual analytics framework to business demographics, survival rates and sector concentration reveals Inner and Outer London spatial patterns in historic business failures and a reversal of these patterns under COVID-19, implying sector influence on the spatial clusters. The combination of unsupervised clustering with multinomial logistic regression modelling on the research datasets, complemented by additional datasets on other support schemes, business structure and financial crime, is recommended for modelling business vulnerability to certain types of financial market or economic conditions. The limitations of clustering techniques for high-dimensional data are discussed, along with the relevance of an applicable model for continuing the research in the next steps.