# Research articles for 2019-08-04

arXiv

The aim of this paper is to introduce a synthetic ALM model that captures the main specific features of life insurance contracts. First, it keeps track of both market and book values in order to apply the regulatory profit-sharing rule. Second, it introduces a determination of the crediting rate to policyholders that is close to practice and is a trade-off between the regulatory rate, a competitor rate, and the available profits. Third, it considers an investment in bonds that enables matching part of the cash outflow due to surrenders, while avoiding storing the trading history. We use this model to evaluate the Solvency Capital Requirement (SCR) with the standard formula, and show that the choice of the interest rate model is important to obtain a meaningful model after the regulatory shocks on interest rates. We discuss the different values of the SCR modules first in a framework with moderate interest rates, using the shocks of the present legislation, and then in a low interest rate framework with the latest EIOPA recommendation on the shocks. In both cases, we illustrate the importance of cash-flow matching and its impact on the SCR.
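The abstract describes the crediting rate as a trade-off between the regulatory rate, a competitor rate, and the available profits. A minimal sketch of one such rule, assuming an illustrative "target the competitor rate, capped by affordability, floored by the regulatory minimum" policy (the function and its exact form are our assumptions, not the paper's specification):

```python
# Hypothetical crediting-rate rule: trade off the regulatory minimum,
# a competitor rate, and the profits actually available to distribute.
# This is an illustrative sketch, not the paper's model.

def crediting_rate(regulatory_rate, competitor_rate, available_profit, reserve):
    """Return the rate credited to policyholders."""
    # Maximum rate the insurer can afford given distributable profits.
    affordable_rate = available_profit / reserve
    # Aim for the competitor rate, capped by affordability...
    target = min(competitor_rate, affordable_rate)
    # ...but never below the regulatory minimum guaranteed rate.
    return max(regulatory_rate, target)

# Profits of 5 on reserves of 100 easily fund the 2% competitor rate.
print(crediting_rate(0.01, 0.02, 5.0, 100.0))  # 0.02
```

When profits are scarce, the affordability cap binds and the rate falls back to the regulatory floor, which is the tension the paper's shocked scenarios stress.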

arXiv

We consider the problem of fast time-series data clustering. Building on previous work modeling the correlation-based Hamiltonian of spin variables, we present a fast, inexpensive agglomerative algorithm. The method is tested on synthetic correlated time series and noisy synthetic datasets with built-in cluster structure to demonstrate that the algorithm produces meaningful, non-trivial results. We argue that ASPC can reduce compute time and resource usage costs for large-scale clustering while remaining serial, and hence has no obvious parallelization requirement. Because it requires no prior information about the number of clusters, the algorithm can be an effective choice for state detection in online learning within a fast, non-linear data environment.
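The core idea of correlation-based agglomerative clustering can be sketched as follows: greedily merge the pair of clusters with the highest average inter-cluster correlation until no pair exceeds a threshold. This is an illustrative stand-in under assumed merge and stopping rules, not the paper's ASPC algorithm:

```python
import numpy as np

# Greedy agglomerative clustering on a correlation matrix (sketch).

def agglomerate(corr, threshold=0.5):
    clusters = [[i] for i in range(len(corr))]
    while len(clusters) > 1:
        best, best_pair = -1.0, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Average correlation between members of the two clusters.
                link = np.mean([corr[i][j] for i in clusters[a] for j in clusters[b]])
                if link > best:
                    best, best_pair = link, (a, b)
        if best < threshold:  # no sufficiently correlated pair left
            break
        a, b = best_pair
        clusters[a] += clusters.pop(b)
    return clusters

# Two built-in blocks: {0, 1} strongly correlated, {2, 3} strongly correlated.
corr = np.array([[1.0, 0.9, 0.1, 0.0],
                 [0.9, 1.0, 0.0, 0.1],
                 [0.1, 0.0, 1.0, 0.8],
                 [0.0, 0.1, 0.8, 1.0]])
print(agglomerate(corr))  # [[0, 1], [2, 3]]
```

Note that the number of clusters emerges from the threshold rather than being fixed in advance, matching the "no prior information about the number of clusters" property claimed above.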

arXiv

The detection of fraud in accounting data is a long-standing challenge in financial statement audits. Nowadays, the majority of applied techniques rely on handcrafted rules derived from known fraud scenarios. While fairly successful, these rules exhibit the drawback that they often fail to generalize beyond known fraud scenarios, and fraudsters gradually find ways to circumvent them. In contrast, more advanced approaches inspired by the recent success of deep learning often lack seamless interpretability of the detected results. To overcome this challenge, we propose the application of adversarial autoencoder networks. We demonstrate that such artificial neural networks are capable of learning a semantically meaningful representation of real-world journal entries. The learned representation provides a holistic view of a given set of journal entries and significantly improves the interpretability of detected accounting anomalies. We show that such a representation, combined with the network's reconstruction error, can be utilized as an unsupervised and highly adaptive anomaly assessment. Experiments on two datasets and initial feedback received from forensic accountants underpin the effectiveness of the approach.
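The reconstruction-error scoring idea above can be illustrated without a full adversarial autoencoder. As a hedged stand-in, a linear autoencoder (PCA) shows the same mechanism: entries the model cannot reconstruct from its learned low-dimensional representation receive a high anomaly score. The data and dimensions below are synthetic assumptions for illustration only:

```python
import numpy as np

# Reconstruction-error anomaly scoring with a linear autoencoder (PCA)
# as a simplified stand-in for the adversarial autoencoder.

rng = np.random.default_rng(0)

# "Normal" entries live near a 2-D subspace of a 5-D feature space.
latent = rng.normal(size=(500, 2))
basis = rng.normal(size=(2, 5))
normal_entries = latent @ basis + 0.01 * rng.normal(size=(500, 5))

# Fit the linear encoder: top-2 principal directions of the data.
mean = normal_entries.mean(axis=0)
_, _, vt = np.linalg.svd(normal_entries - mean, full_matrices=False)
components = vt[:2]  # shared encoder/decoder weights

def anomaly_score(x):
    """Squared reconstruction error of x under the linear autoencoder."""
    code = (x - mean) @ components.T  # encode into the 2-D representation
    recon = code @ components + mean  # decode back to feature space
    return float(np.sum((x - recon) ** 2))

typical = normal_entries[0]
outlier = typical + 5.0 * rng.normal(size=5)  # entry far off the manifold
print(anomaly_score(typical) < anomaly_score(outlier))  # True
```

The adversarial component of the paper's architecture additionally shapes the latent representation, which is what supports the interpretability claims; the scoring principle, however, is the reconstruction error shown here.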

arXiv

We propose to solve the large-scale Markowitz mean-variance (MV) portfolio allocation problem using reinforcement learning (RL). By adopting the recently developed continuous-time exploratory control framework, we formulate the exploratory MV problem in high dimensions. We further show the optimality of a multivariate Gaussian feedback policy, with time-decaying variance, in trading off exploration and exploitation. Based on a provable policy improvement theorem, we devise a scalable and data-efficient RL algorithm and conduct large-scale empirical tests using data from the S&P 500 stocks. We find that our method consistently achieves over 10% annualized returns and outperforms econometric methods and the deep RL method by large margins, for both long and medium investment horizons with monthly and daily trading.
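The policy structure described above can be sketched directly: actions are drawn from a multivariate Gaussian whose mean is the feedback (exploitation) estimate and whose variance decays over time, so early actions explore and late actions concentrate on the mean. The linear decay schedule below is an illustrative assumption, not the paper's closed-form solution:

```python
import numpy as np

# Gaussian feedback policy with time-decaying exploration variance
# (illustrative schedule; the paper derives the optimal one in closed form).

def gaussian_feedback_policy(mean_action, t, horizon, sigma0=0.2, rng=None):
    """Sample portfolio weights from N(mean_action, sigma_t^2 I)."""
    rng = rng or np.random.default_rng()
    sigma_t = sigma0 * (1.0 - t / horizon)  # exploration decays to zero
    noise = rng.normal(scale=sigma_t, size=len(mean_action))
    return np.asarray(mean_action) + noise

rng = np.random.default_rng(42)
# Early in the horizon: weights are randomized around the mean (exploration).
early = gaussian_feedback_policy([0.5, 0.5], t=0, horizon=10, rng=rng)
# At the horizon: variance is zero, so the mean is played exactly (exploitation).
late = gaussian_feedback_policy([0.5, 0.5], t=10, horizon=10, rng=rng)
print(late)  # [0.5 0.5]
```

The trade-off is visible in the schedule: sampling variance, and hence exploration, shrinks as the remaining investment horizon shortens.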

arXiv

We consider an auction market in which market makers fill the order book during a given time period while some other investors send market orders. We define the clearing price of the auction as the price maximizing the exchanged volume at the clearing time, according to the supply and demand of each market participant. Then we derive, in semi-explicit form, the error made between this clearing price and the fundamental price as a function of the auction duration. We study the impact of the behavior of market takers on this error. To do so, we consider the case of naive market takers and that of rational market takers playing a Nash equilibrium to minimize their transaction costs. We compute the optimal duration of the auctions for 77 stocks traded on Euronext and compare the quality of the price formation process under this optimal value to the case of a continuous limit order book. Continuous limit order books are found to be usually sub-optimal; however, in terms of our metric, they only moderately impair the quality of the price formation process. Optimal auction durations are on the order of 2 to 10 minutes.
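The clearing rule stated above — the price maximizing the exchanged volume given the book's supply and demand — can be sketched as follows, with illustrative order data (the helper function and its inputs are assumptions, not the paper's notation):

```python
# Auction clearing: pick the candidate price maximizing matched volume,
# i.e. min(buy demand at or above p, sell supply at or below p).

def clearing_price(buys, sells):
    """buys/sells: lists of (limit_price, quantity). Returns (price, volume)."""
    candidates = sorted({p for p, _ in buys} | {p for p, _ in sells})
    best_price, best_volume = None, -1
    for p in candidates:
        demand = sum(q for limit, q in buys if limit >= p)   # willing to buy at p
        supply = sum(q for limit, q in sells if limit <= p)  # willing to sell at p
        volume = min(demand, supply)
        if volume > best_volume:
            best_price, best_volume = p, volume
    return best_price, best_volume

buys = [(101, 5), (100, 10), (99, 5)]
sells = [(98, 5), (100, 10), (102, 5)]
print(clearing_price(buys, sells))  # (100, 15)
```

At 100, fifteen units of demand meet fifteen units of supply; any higher or lower candidate price matches strictly less volume, so 100 clears the auction.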