# Research articles for 2019-09-22

arXiv

We propose a method for the detection and prediction of native and synthetic iceberg orders on the Chicago Mercantile Exchange. Native icebergs (managed by the exchange) are detected using discrepancies between the resting volume of an order and the actual trade size indicated by trade summary messages, as well as by tracking order modifications that follow trade events. Synthetic icebergs (managed by market participants) are detected by observing limit orders arriving within a short time frame after a trade. The detected icebergs are then used to train a model based on the Kaplan--Meier estimator, accounting for orders that were cancelled after a partial execution. The model is used to predict the total size of newly detected icebergs. Out-of-sample validation is performed on full order depth data, and performance metrics together with quantitative estimates of hidden volume are presented.
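The censoring logic the abstract describes maps directly onto the standard Kaplan--Meier construction: an iceberg that runs to completion is an "event" at its revealed total size, while an iceberg cancelled after a partial fill is right-censored at the volume executed so far. A minimal sketch (sizes and the event convention are illustrative assumptions, not from the paper):

```python
def kaplan_meier(sizes, observed):
    """Kaplan-Meier estimate of S(s) = P(iceberg total size > s).

    sizes    : executed volume of each detected iceberg (lots)
    observed : True if the iceberg ran to completion (event),
               False if cancelled after a partial fill (right-censored)
    """
    data = sorted(zip(sizes, observed))
    at_risk = len(data)
    surv = 1.0
    curve = []  # (size, S(size)) recorded at each event size
    i = 0
    while i < len(data):
        s = data[i][0]
        events = sum(1 for x, o in data if x == s and o)
        ties = sum(1 for x, _ in data if x == s)
        if events:
            surv *= 1.0 - events / at_risk  # multiplicative KM update
            curve.append((s, surv))
        at_risk -= ties  # both events and censorings leave the risk set
        i += ties
    return curve
```

Censored orders shrink the risk set without forcing the survival curve down, which is exactly how partially-executed-then-cancelled icebergs avoid biasing the size prediction downward.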

arXiv

We study identification and estimation of the causal effects of a binary treatment in settings with panel data. We highlight that there are two paths to identification in the presence of unobserved confounders. First, the conventional path, based on assumptions about the relation between the potential outcomes and the unobserved confounders. Second, a design-based path, where assumptions are made about the relation between the treatment assignment and the confounders. We introduce different sets of assumptions that follow the two paths, and develop doubly robust approaches to identification that exploit both, similar in spirit to the doubly robust approaches to estimation in the program evaluation literature.
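The "doubly robust estimation in the program evaluation literature" the abstract alludes to is typically the augmented IPW (AIPW) form, which is consistent if either the outcome models or the propensity model is correct. A sketch of that standard estimator (the toy data and the correctly specified models are illustrative assumptions, not the paper's estimator):

```python
import numpy as np

def aipw_ate(y, t, e_hat, m1_hat, m0_hat):
    """Augmented IPW (doubly robust) estimate of the average treatment effect.

    y      : observed outcomes
    t      : binary treatment indicator (0/1)
    e_hat  : estimated propensity scores P(T=1 | X)
    m1_hat : predicted outcomes under treatment
    m0_hat : predicted outcomes under control
    """
    return np.mean(m1_hat - m0_hat
                   + t * (y - m1_hat) / e_hat
                   - (1 - t) * (y - m0_hat) / (1 - e_hat))

# Toy example: true effect is exactly 2, outcome models correct, e = 0.5.
x = np.array([0.0, 1.0, 2.0, 3.0])
t = np.array([0.0, 1.0, 0.0, 1.0])
y = 2 * t + x
est = aipw_ate(y, t, e_hat=np.full(4, 0.5), m1_hat=x + 2, m0_hat=x)
```

With both nuisance models correct the residual corrections vanish and the estimate recovers the effect exactly; misspecifying one of the two leaves the estimator consistent, which is the double robustness property.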

arXiv

The market economy is closely connected to all walks of life, and stock forecasting is one of the central tasks in its study. However, market information contains considerable noise and uncertainty, which makes economic forecasting a challenging task. Ensemble learning and deep learning are among the most common methods used for stock forecasting. In this paper, we present a model that combines the advantages of the two to forecast changes in stock price: the proposed method combines a CNN with gradient boosting (GBoost). Experimental results on six market indexes show that the proposed method outperforms current popular methods.

arXiv

We consider the problem of designing a derivatives exchange that aims to address clients' needs in terms of listed options and to provide suitable liquidity. We proceed in two steps. First, we use a quantization method to select the options that should be displayed by the exchange. Then, using a principal-agent approach, we design a make-take fees contract between the exchange and the market maker. The role of this contract is to provide incentives to the market maker to offer small spreads across the whole range of listed options, thereby attracting transactions and meeting the commercial requirements of the exchange.
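The abstract does not specify which quantization method is used; the generic workhorse for picking a small codebook of representatives (e.g. a handful of listed strikes summarizing a distribution of client demand) is Lloyd's algorithm. A minimal one-dimensional sketch under that assumption:

```python
import numpy as np

def lloyd_1d(samples, k, iters=50):
    """Lloyd's algorithm in 1D: choose k representative points
    (e.g. strikes to list) minimizing mean squared quantization
    error over a sample of client demand.
    """
    samples = np.sort(np.asarray(samples, dtype=float))
    # initialize the codebook at evenly spaced interior quantiles
    grid = np.quantile(samples, np.linspace(0, 1, k + 2)[1:-1])
    for _ in range(iters):
        # assignment step: nearest codepoint for each sample
        idx = np.abs(samples[:, None] - grid[None, :]).argmin(axis=1)
        # update step: move each codepoint to its cell centroid
        for j in range(k):
            cell = samples[idx == j]
            if cell.size:
                grid[j] = cell.mean()
    return grid
```

On bimodal demand the codebook settles on the two modes, which is the intuition behind letting quantization decide which options to display.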

arXiv

Why are we good? Why are we bad? Questions regarding the evolution of morality have spurred an astoundingly large interdisciplinary literature. Some significant subset of this body of work addresses questions regarding our moral psychology: how did humans evolve the psychological properties which underpin our systems of ethics and morality? Here I do three things. First, I discuss some methodological issues, and defend particularly effective methods for addressing many research questions in this area. Second, I give an in-depth example, describing how an explanation can be given for the evolution of guilt---one of the core moral emotions---using the methods advocated here. Last, I lay out which sorts of strategic scenarios generally are the ones that our moral psychology evolved to `solve', and thus which models are the most useful in further exploring this evolution.

arXiv

This paper studies the optimal dividend problem for a multi-line insurance group in which each insurance company is exposed to external credit default risk. Default contagion is considered in the sense that one default event can affect the default probabilities of all surviving insurance companies. The total dividend problem is formulated for the insurance group, and we reveal for the first time that the optimal singular dividend strategy is still of barrier type. Furthermore, we show that the optimal barrier for each insurance company is modulated by the current default state: how many, and which, companies have defaulted determines the dividend threshold for each surviving company. These conclusions match observations from the real market and are based on our analysis of the associated recursive system of Hamilton-Jacobi-Bellman variational inequalities (HJBVIs), which is new to the literature. The existence of a classical solution is established and a rigorous proof of the verification theorem is provided. For the case of two companies, the value function and optimal barriers can be constructed explicitly.

arXiv

A new framework for portfolio diversification is introduced which goes beyond the classical mean-variance approach and portfolio allocation strategies such as risk parity. It is based on a novel concept called portfolio dimensionality that connects diversification to the non-Gaussianity of portfolio returns and can typically be defined in terms of the ratio of risk measures which are homogeneous functions of equal degree. The latter arises naturally from our requirement that diversification measures be leverage invariant. We introduce this new framework and argue for its benefits relative to existing measures of diversification in the literature, before addressing the question of optimizing diversification or, equivalently, dimensionality. Maximising portfolio dimensionality leads to highly non-trivial optimization problems whose objective functions are typically non-convex and potentially have multiple local optima. Two complementary global optimization algorithms are therefore presented. For problems of moderate size, more akin to asset allocation problems, a deterministic Branch and Bound algorithm is developed, whereas for problems of larger size a stochastic global optimization algorithm based on Gradient Langevin Dynamics is given. We demonstrate analytically and through numerical experiments that the framework reflects the desired properties often discussed in the literature.
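A ratio of two degree-1 homogeneous risk measures is invariant to scaling the weights, which is the leverage-invariance requirement above. The sketch below uses an illustrative objective of that shape (mean absolute deviation over standard deviation; the paper's actual measures are not given in the abstract) and runs a plain Gradient Langevin Dynamics loop with a finite-difference gradient and a crude simplex projection, both assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def dimensionality(w, returns):
    """Illustrative leverage-invariant objective: ratio of two
    degree-1 homogeneous risk measures of the portfolio return."""
    r = returns @ w
    return np.mean(np.abs(r - r.mean())) / (r.std() + 1e-12)

def gld_maximize(returns, steps=200, eta=1e-3, beta=1e4):
    """Gradient Langevin Dynamics: gradient ascent plus Gaussian
    noise scaled by sqrt(2*eta/beta), to escape local optima."""
    n = returns.shape[1]
    w = np.ones(n) / n
    for _ in range(steps):
        f0 = dimensionality(w, returns)
        g = np.zeros(n)
        for i in range(n):  # finite-difference gradient
            wp = w.copy()
            wp[i] += 1e-5
            g[i] = (dimensionality(wp, returns) - f0) / 1e-5
        w = w + eta * g + np.sqrt(2 * eta / beta) * rng.standard_normal(n)
        # crude projection back to the long-only simplex
        w = np.maximum(w, 0.0)
        w /= w.sum()
    return w
```

The injected noise is what distinguishes this from plain gradient ascent and is the reason the method is presented as a *global* optimizer for the non-convex dimensionality objective.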

arXiv

In this thesis, we develop a comprehensive account of the expressive power, modelling efficiency, and performance advantages of so-called trading agents (i.e., the Deep Soft Recurrent Q-Network (DSRQN) and the Mixture of Score Machines (MSM)), based on both traditional system identification (a model-based approach) and context-independent agents (a model-free approach). The analysis provides conclusive support for the ability of model-free reinforcement learning methods to act as universal trading agents, which not only reduce computational and memory complexity (owing to their linear scaling with the size of the universe), but also generalize across assets and markets, regardless of the trading universe on which they have been trained. The relatively small volume of daily returns in financial market data is addressed via data augmentation (a generative approach) and a choice of pre-training strategies, both of which are validated against current state-of-the-art models. For rigour, a risk-sensitive framework which includes transaction costs is considered, and its performance advantages are demonstrated in a variety of scenarios, from synthetic time series (sinusoidal, sawtooth and chirp waves) and simulated market series (based on surrogate data) through to real market data (S\&P 500 and EURO STOXX 50). The analysis and simulations confirm the superiority of universal model-free reinforcement learning agents over current portfolio management models in asset allocation strategies, with a performance advantage of as much as 9.2\% in annualized cumulative returns and 13.4\% in annualized Sharpe ratio.

arXiv

This systemic risk paper introduces inhomogeneous random financial networks (IRFNs). Such models are intended to describe parts, or the entirety, of a highly heterogeneous network of banks and their interconnections in the global financial system. Both the balance sheets and the stylized crisis behaviour of banks are ingredients of the network model. A systemic crisis is pictured as triggered by a shock to banks' balance sheets, which then leads to the propagation of damaging shocks, the potential for amplification of the crisis, and finally a cascade equilibrium. Under some conditions the model has "locally tree-like independence (LTI)", where a general percolation-theoretic argument leads to an analytic fixed point equation describing the cascade equilibrium as the number of banks $N$ in the system is taken to infinity. This paper focuses on mathematical properties of the framework in the context of Eisenberg-Noe solvency cascades generalized to account for fractional bankruptcy charges. New results, including a definition and proof of the "LTI property" of the Eisenberg-Noe solvency cascade mechanism, lead to explicit $N=\infty$ fixed point equations that arise under very general model specifications. The essential formulas are shown to be implementable via well-defined approximation schemes, but numerical exploration of the wide range of potential applications of the method is left for future work.
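The finite-$N$ object underlying these cascades is the classical Eisenberg-Noe clearing payment vector, computable by Picard iteration. A minimal sketch for intuition (the `alpha` recovery fraction is a crude stand-in for fractional bankruptcy charges, not the paper's generalized mechanism, and this is the finite-network fixed point rather than the paper's $N=\infty$ equations):

```python
import numpy as np

def clearing_vector(L, e, alpha=1.0, tol=1e-10, max_iter=1000):
    """Picard iteration for the Eisenberg-Noe clearing payment vector.

    L     : nominal interbank liabilities, L[i, j] owed by bank i to bank j
    e     : external assets of each bank
    alpha : fraction of a defaulted bank's assets actually paid out
            (alpha < 1 crudely models a fractional bankruptcy charge)
    """
    pbar = L.sum(axis=1)  # total nominal obligation of each bank
    with np.errstate(invalid="ignore", divide="ignore"):
        Pi = np.where(pbar[:, None] > 0, L / pbar[:, None], 0.0)
    p = pbar.copy()  # start from full payment and iterate downward
    for _ in range(max_iter):
        assets = e + Pi.T @ p  # cash available to each bank
        p_new = np.where(assets >= pbar, pbar, alpha * assets)
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p
```

Each sweep re-evaluates every bank's assets under the current payments and caps defaulting banks at their (discounted) assets; the monotone iteration converges to the cascade equilibrium the abstract refers to.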

arXiv

We develop a general model for finding the optimal penal strategy based on the behavioral traits of offenders. We focus on how the discount rate (the level of time discounting) affects criminal propensity at the individual level, and how the aggregation of these effects influences criminal activity at the population level. The effects are aggregated according to the distribution of the discount rate among the population. We study this distribution empirically through a survey with 207 participants, and show that it follows a zero-inflated exponential distribution. We quantify the effectiveness of a penal strategy as its net utility for the population, and show how this quantity can be maximized. Applying the maximization procedure to the offense of impaired driving (DWI), we find that the effectiveness of DWI deterrence depends critically on the amount of the fine and on prison conditions.
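For the distribution family the abstract names, the maximum-likelihood fit is closed form because the zero mass and the exponential rate separate: the zero-inflation probability is the empirical zero fraction, and the rate is the reciprocal of the mean of the positive observations. A sketch (the sample data are illustrative, not the paper's survey responses):

```python
def fit_zero_inflated_exponential(data):
    """Closed-form MLE for a zero-inflated exponential distribution.

    Model: P(X = 0) = p, and X | X > 0 ~ Exponential(rate=lam).
    """
    zeros = sum(1 for x in data if x == 0)
    positives = [x for x in data if x > 0]
    p = zeros / len(data)          # MLE of the zero-inflation mass
    lam = len(positives) / sum(positives)  # MLE rate = 1 / mean of positives
    return p, lam
```

The point mass at zero captures respondents who report no time discounting at all, while the exponential tail captures the spread among the rest.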