Research articles for 2021-01-10
arXiv
The paper introduces a very simple and fast method for computing high-dimensional integrals in order to solve high-dimensional Kolmogorov partial differential equations (PDEs). The new machine-learning-based method solves a stochastic weighted minimization problem with stochastic gradient descent, and is inspired by a high-order weak approximation scheme for stochastic differential equations (SDEs) with Malliavin weights. Solutions to high-dimensional Kolmogorov PDEs, or expectations of functionals of solutions to high-dimensional SDEs, can then be accurately approximated without suffering from the curse of dimensionality. Numerical examples for PDEs and SDEs in up to 100 dimensions, using second- and third-order discretization schemes, demonstrate the effectiveness of the method.
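As a concrete illustration of the machine-learning component only, the following minimal sketch learns u(x) ~ E[g(X_T) | X_0 = x] by regressing a neural network on simulated terminal values with stochastic gradient descent. The Euler-Maruyama simulator, the SDE, and the payoff are illustrative assumptions; the paper's higher-order weak schemes with Malliavin weights would replace the path simulator.

# Minimal sketch, assuming dX = 0.2 X dW and a basket-call payoff; the
# paper's weak high-order scheme with Malliavin weights is NOT used here.
import torch

d, T, n_steps, batch = 100, 1.0, 50, 256
g = lambda x: torch.clamp(x.mean(dim=1) - 1.0, min=0.0)  # example payoff

net = torch.nn.Sequential(
    torch.nn.Linear(d, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
dt = T / n_steps

for step in range(1000):
    x0 = 0.5 + torch.rand(batch, d)           # sample initial points
    x = x0.clone()
    for _ in range(n_steps):                  # Euler-Maruyama step
        x = x + 0.2 * x * torch.randn(batch, d) * dt**0.5
    # L2 regression: the minimizer satisfies net(x0) = E[g(X_T) | X_0 = x0]
    loss = ((net(x0).squeeze(-1) - g(x)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()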
arXiv
The information dynamics in finance and insurance applications is usually modeled by a filtration. This paper looks at situations where information restrictions apply, so that the information dynamics may become non-monotone. Martingale representations are a fundamental tool for calculating and managing risks in finance and insurance. We present a general theory that extends classical martingale representations to non-monotone information generated by marked point processes. The central idea is to focus only on those properties that martingales and compensators exhibit on infinitesimally short intervals. While classical martingale representations describe innovations only, our representations have an additional symmetric counterpart that quantifies the effect of information loss. We illustrate the results with examples from life insurance and credit risk.
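For orientation, the classical representation that the paper generalizes can be stated as follows; this is the standard textbook form for monotone information, not the paper's notation. Every square-integrable martingale M adapted to the filtration generated by a marked point process with jump measure \mu and compensator \nu admits

M_t = M_0 + \int_0^t \!\! \int_E h(s,z)\, \bigl( \mu(\mathrm{d}s,\mathrm{d}z) - \nu(\mathrm{d}s,\mathrm{d}z) \bigr),

a stochastic integral against the compensated innovation measure. The paper's contribution is a non-monotone analogue in which a symmetric counterpart of this integral quantifies information loss.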
arXiv
Adversarial samples have drawn a lot of attention from the machine learning community in the past few years. An adversarial sample is an artificial data point, obtained through an imperceptible modification of a real sample, that is designed to mislead a model. Surprisingly, little has been done on this topic in financial research from a concrete trading point of view. We show that such adversarial samples can be implemented in a trading environment and have a negative impact on certain market participants. This could have far-reaching implications for financial markets, from both a trading and a regulatory point of view.
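The abstract does not specify how the perturbations are constructed; a standard illustration is the fast gradient sign method (FGSM), sketched below on a generic differentiable model. The function and its epsilon are illustrative assumptions, not the paper's trading-specific construction.

# Hedged illustration: FGSM on a generic differentiable model, one standard
# way to build adversarial samples. The paper's construction may differ.
import torch

def fgsm(model, x, y, loss_fn, eps=1e-3):
    # Perturb x by eps * sign(grad_x loss): the classic FGSM attack.
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()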
arXiv
Economic activities favor mutual geographical proximity and concentrate spatially to form cities. In a world of diminishing transport costs, however, the advantage of physical proximity is fading, and the role of cities in the economy may be declining. To provide insights into the long-run evolution of cities, we analyzed Japan's census data over the 1970--2015 period. We found that fewer and larger cities thrived at the national scale, suggesting an eventual mono-centric economy with a single megacity; simultaneously, each larger city flattened out at the local scale, suggesting an eventual extinction of cities. We interpret this multi-scale phenomenon as an instance of pattern formation by self-organization, which is widely studied in mathematics and biology. However, cities' dynamics are distinct from mathematical or biological mechanisms because they are governed by economic interactions mediated by transport costs between locations. Our results call for the synthesis of knowledge in mathematics, biology, and economics to open the door for a general pattern formation theory that is applicable to socioeconomic phenomena.
arXiv
A new method for stochastic control, based on neural networks and the randomisation of discrete random variables, is proposed and applied to optimal stopping time problems. The method directly models the policy and does not require the derivation of a dynamic programming principle or a backward stochastic differential equation. Unlike in continuous optimization, where automatic differentiation can be applied directly, we propose a likelihood-ratio method for gradient computation. Numerical tests are performed on the pricing of American and swing options. The proposed algorithm succeeds in pricing high-dimensional American and swing options in a reasonable computation time, which is not possible with classical algorithms.
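A minimal sketch of the likelihood-ratio (score-function) idea for a randomized stopping policy follows: sample the discrete stop/continue decisions, then weight the gradient of their log-probabilities by the realized payoff. The network architecture, the one-dimensional state, and the training loop are assumptions for illustration, not the paper's exact algorithm.

# Likelihood-ratio (REINFORCE-style) gradient for a randomized stopping
# policy; all architecture/payoff details are illustrative assumptions.
import torch

policy = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.ReLU(),
                             torch.nn.Linear(32, 1))  # logit of stop prob.
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def train_step(paths, payoff):
    # paths: (batch, n_dates) simulated prices; payoff(s) -> exercise value
    batch, n_dates = paths.shape
    logprob = torch.zeros(batch)
    reward = payoff(paths[:, -1])             # default: exercise at maturity
    stopped = torch.zeros(batch, dtype=torch.bool)
    for t in range(n_dates - 1):
        p_stop = torch.sigmoid(policy(paths[:, t:t+1])).squeeze(-1)
        stop = torch.bernoulli(p_stop).bool() & ~stopped
        logprob = logprob + torch.where(
            stopped, torch.zeros_like(p_stop),
            torch.where(stop, torch.log(p_stop), torch.log1p(-p_stop)))
        reward = torch.where(stop, payoff(paths[:, t]), reward)
        stopped = stopped | stop
    # E[payoff * grad log-prob] estimates the policy gradient
    loss = -(reward.detach() * logprob).mean()
    opt.zero_grad(); loss.backward(); opt.step()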
arXiv
We propose deep neural network algorithms to compute the efficient frontier in some Mean-Variance and Mean-CVaR portfolio optimization problems. We show that we are able to deal with such problems when both the dimension of the state and the dimension of the control are high. Adding further constraints, we compare different formulations and show that a new projected feedforward network is able to handle some global constraints on the portfolio weights while outperforming classical penalization methods. All of the developed formulations are compared with one another. Depending on the problem and its dimension, some formulations may be preferred.
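One way such a projection layer can be realized, sketched here under assumptions (the paper's exact construction may differ), is to map the network's raw outputs onto long-only, fully invested weights with a per-asset cap; the Euclidean projection onto this set can be computed by bisection on a dual variable.

# Projection of raw outputs v onto {w : 0 <= w <= w_max, sum(w) = 1}.
# Requires n_assets * w_max >= 1 so the constraint set is non-empty.
import torch

def project_capped_simplex(v, w_max=0.25, iters=60):
    # w = clip(v - tau, 0, w_max); sum(w) is decreasing in tau, so bisect.
    lo = v.min(dim=-1, keepdim=True).values - w_max   # here sum >= n * w_max
    hi = v.max(dim=-1, keepdim=True).values           # here sum <= 0
    for _ in range(iters):
        tau = (lo + hi) / 2
        s = torch.clamp(v - tau, 0.0, w_max).sum(-1, keepdim=True)
        lo = torch.where(s > 1.0, tau, lo)
        hi = torch.where(s > 1.0, hi, tau)
    return torch.clamp(v - (lo + hi) / 2, 0.0, w_max)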
arXiv
This paper investigates the effects of introducing external medical review for disability insurance (DI) in a system relying on treating physician testimony for eligibility determination. Using a unique policy change and administrative data from Switzerland, I show that medical review reduces DI incidence by 23%. Incidence reductions are closely tied to difficult-to-diagnose conditions, suggesting inaccurate assessments by treating physicians. Due to a partial benefit system, reductions in full benefit awards are partly offset by increases in partial benefits. More intense screening also increases labor market participation. Existing benefit recipients are downgraded and lose part of their benefit income when scheduled medical reviews occur. Back-of-the-envelope calculations indicate that external medical review is highly cost-effective. Under additional assumptions, the results provide a lower bound of the effect on the false positive award error rate.
arXiv
One of the exciting recent developments in decentralized finance (DeFi) has been the development of decentralized cryptocurrency exchanges that can autonomously handle conversion between different cryptocurrencies. Decentralized exchange protocols such as Uniswap, Curve and other types of Automated Market Makers (AMMs) maintain a liquidity pool (LP) of two or more assets whose reserves are constrained to satisfy, at all times, a mathematical relation defined by a given function or curve. Examples of such functions are the constant-sum and constant-product AMMs. Existing systems, however, suffer from several challenges. They require external arbitrageurs to restore the price of tokens in the pool to match the market price, and such activity can drain resources from the liquidity pool. In particular, dramatic market price changes can result in low liquidity with respect to one or more of the assets and reduce the total value of the LP. In this work we propose a new approach to constructing AMMs based on the idea of dynamic curves. It uses input from a market price oracle to modify the mathematical relationship between the assets so that the pool price continuously and automatically adjusts to be identical to the market price. This approach eliminates arbitrage opportunities and, as we show through simulations, maintains liquidity in the LP for all assets and preserves the total value of the LP over a wide range of market prices.
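One possible instantiation of a dynamic curve, sketched here under assumptions (the paper's mechanism may differ), is a weighted constant-product invariant x^w * y^(1-w) = k whose weight w is reset from the oracle price so that the pool's spot price always equals the market price.

# Hedged sketch: weighted constant-product pool with oracle-driven weight.
class DynamicWeightedPool:
    def __init__(self, x, y, oracle_price):
        self.x, self.y = x, y              # reserves of assets X and Y
        self.update_weight(oracle_price)

    def update_weight(self, p):
        # choose w so the spot price (w / (1-w)) * (y / x) equals p
        self.w = p * self.x / (p * self.x + self.y)

    def spot_price(self):
        return (self.w / (1.0 - self.w)) * (self.y / self.x)

    def swap_x_for_y(self, dx):
        # keep x^w * y^(1-w) invariant while adding dx units of X
        k = self.x ** self.w * self.y ** (1.0 - self.w)
        new_y = (k / (self.x + dx) ** self.w) ** (1.0 / (1.0 - self.w))
        dy = self.y - new_y
        self.x, self.y = self.x + dx, new_y
        return dy

pool = DynamicWeightedPool(x=100.0, y=20000.0, oracle_price=200.0)
print(pool.spot_price())   # 200.0 by construction; re-call update_weight
                           # on every oracle tick to track the market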
arXiv
Among industrialized countries, the U.S. holds two rather inglorious records: the highest rate of fatal police shootings and the highest rate of deaths related to firearms. The latter has been associated with the widespread diffusion of firearms ownership, largely due to loose legislation in several states. The present paper investigates the relation between firearms legislation/diffusion and the number of fatal police shooting episodes using a seven-year panel dataset. While our results confirm the negative impact of stricter firearms regulations found in previous cross-sectional studies, we find that the diffusion of gun ownership has no statistically significant effect. Furthermore, regulations pertaining to gun owner accountability appear to be the most effective in reducing fatal police shootings.
arXiv
In this paper, an approximate version of the Barndorff-Nielsen and Shephard model, driven by a Brownian motion and a L\'evy subordinator, is formulated. The first-exit time of the log-return process for this model is analyzed. It is shown that, with a certain probability, the first-exit time of the log-return process decomposes into the sum of the first-exit time of a Brownian motion with drift and the first-exit time of a L\'evy subordinator with drift. Subsequently, the probability density functions of the first-exit time of some specific L\'evy subordinators, connected to stationary, self-decomposable variance processes, are studied. Analytical expressions for the probability density function of the first-exit time of three such L\'evy subordinators are obtained in terms of various special functions. The results are applied to an empirical S&P 500 dataset.
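A Monte Carlo illustration of the decomposition, sketched under assumptions: simulate the first-exit time of a drifted Brownian motion and, separately, that of a drifted gamma subordinator, and sum them. All parameters, barrier levels, and the choice of a gamma subordinator are illustrative; the paper works with specific self-decomposable subordinators and derives analytical densities.

# Hedged Monte Carlo sketch of tau = tau_BM + tau_subordinator.
import numpy as np

rng = np.random.default_rng(0)
dt, level_bm, level_sub = 1e-3, 0.2, 0.5
mu, sigma = 0.05, 0.2            # BM drift / volatility (assumed)
a, b, drift = 2.0, 10.0, 0.1     # gamma shape rate / subordinator drift

def first_exit_bm():
    x, t = 0.0, 0.0
    while abs(x) < level_bm:     # two-sided exit for the diffusive part
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

def first_exit_sub():
    x, t = 0.0, 0.0
    while x < level_sub:         # subordinator is increasing: one-sided exit
        x += drift * dt + rng.gamma(a * dt, 1.0 / b)
        t += dt
    return t

taus = [first_exit_bm() + first_exit_sub() for _ in range(100)]
print(np.mean(taus))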
arXiv
This paper applies recurrent neural network (RNN) methods to forecast cotton and oil prices. We show how these new tools from machine learning, particularly Long Short-Term Memory (LSTM) models, complement traditional methods. Our results show that machine learning methods fit the data reasonably well but do not systematically outperform classical methods such as Autoregressive Integrated Moving Average (ARIMA) models in out-of-sample forecasts. However, averaging the forecasts from the two types of models provides better results than either method alone. For cotton, the Root Mean Squared Error (RMSE) of the average forecast was 0.21 percent lower than that of the ARIMA and 21.49 percent lower than that of the LSTM. For oil, forecast averaging does not provide improvements in terms of RMSE. We suggest using a forecast-averaging method and extending our analysis to a wide range of commodity prices.
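The averaging scheme described reduces to a simple mean of the two models' out-of-sample forecasts, compared by RMSE. The sketch below assumes the ARIMA and LSTM forecast arrays have already been produced by fitted models.

# Forecast averaging and RMSE comparison; forecast arrays are placeholders
# for the output of fitted ARIMA / LSTM models.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def average_forecast(arima_pred, lstm_pred):
    return (np.asarray(arima_pred) + np.asarray(lstm_pred)) / 2.0

# usage: rmse(y_test, average_forecast(f_arima, f_lstm)) versus
#        rmse(y_test, f_arima) and rmse(y_test, f_lstm)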
arXiv
A planner aims to target individuals who exceed a threshold in a characteristic, such as wealth or ability. The individuals can rank their friends according to the characteristic. We study a strategy-proof mechanism for the planner to use the rankings for targeting. We discuss how the mechanism works in practice, when the rankings may contain errors.
arXiv
The paper proposes a computational adaptation of the principles underlying principal component analysis, combined with agent-based simulation, to produce a novel modeling methodology for financial time series and financial markets. The goal of the proposed methodology is to find a reduced set of investors' models (agents) able to approximate or explain a target financial time series. As a computational testbed for the study, we choose the learning system L-FABS, which combines simulated annealing with agent-based simulation for approximating financial time series. We also comment on how L-FABS's architecture could exploit parallel computation to scale when dealing with massive agent simulations. Two experimental case studies showing the efficacy of the proposed methodology are reported.
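The PCA-style idea alone can be sketched as follows, under assumptions: stack candidate agents' signals as columns, take the leading principal directions, and regress the target series on them. L-FABS itself searches agent combinations with simulated annealing; this is only the dimensionality-reduction analogy, not the paper's algorithm.

# Hedged sketch: approximate a target series with a few principal
# components of an agent-signal matrix.
import numpy as np

def approximate_target(agent_signals, target, n_components=3):
    # agent_signals: (T, n_agents); target: (T,). Returns the fitted series.
    centered = agent_signals - agent_signals.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    factors = u[:, :n_components] * s[:n_components]   # leading components
    coef, *_ = np.linalg.lstsq(factors, target - target.mean(), rcond=None)
    return factors @ coef + target.mean()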
arXiv
The liquidity risk factor of a security market plays an important role in the formulation of trading strategies. A more liquid stock market means that securities can be bought or sold more easily. As a sound indicator of market liquidity, the transaction duration is the focus of this study. We concentrate on estimating the conditional probability density function $p(\Delta t_{i+1} \mid \mathcal{G}_i)$, where $\Delta t_{i+1}$ is the duration of the $(i+1)$-th transaction and $\mathcal{G}_i$ is the historical information available when the $(i+1)$-th transaction occurs. In this paper, we propose a new ultra-high-frequency (UHF) duration modelling framework that uses long short-term memory (LSTM) networks to extend the conditional mean equation of the classic autoregressive conditional duration (ACD) model while retaining its probabilistic inference ability. An attention mechanism is then leveraged to unveil the internal mechanism of the constructed model. To minimize the impact of manual parameter tuning, we adopt fixed hyperparameters during the training process. Experiments on a large-scale dataset demonstrate the superiority of the proposed hybrid models: the temporal positions in the input sequence that are most important for predicting the next duration are efficiently highlighted by the added attention layer.
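A minimal sketch of the LSTM-extended ACD idea, under assumptions: the conditional mean duration psi_i is produced by an LSTM over past durations, durations are modeled as psi_i * eps_i with exponential eps_i, and training minimizes the corresponding negative log-likelihood. Architecture details (hidden size, error distribution, attention layer) are illustrative, not the paper's exact specification.

# LSTM-extended ACD sketch with exponential errors.
import torch

class LSTMACD(torch.nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = torch.nn.LSTM(input_size=1, hidden_size=hidden,
                                  batch_first=True)
        self.head = torch.nn.Linear(hidden, 1)

    def forward(self, durations):             # durations: (batch, seq, 1)
        h, _ = self.lstm(durations)
        return torch.nn.functional.softplus(self.head(h)) + 1e-6  # psi > 0

def neg_loglik(psi, next_durations):
    # exponential density: -log p = log(psi) + dt / psi
    return (torch.log(psi) + next_durations / psi).mean()

# usage, with d of shape (batch, seq, 1):
#   psi = model(d[:, :-1]); loss = neg_loglik(psi, d[:, 1:])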
arXiv
We provide an explicit characterization of the optimal market making strategy in a discrete-time Limit Order Book (LOB). In our model, the number of filled orders during each period depends linearly on the distance between the fundamental price and the market maker's limit order quotes, with random slope and intercept coefficients. The high-frequency market maker (HFM) incurs an end-of-the-day liquidation cost resulting from linear price impact. The optimal placement strategy incorporates, in a novel and parsimonious way, forecasts about future changes in the asset's fundamental price. We show that the randomness in the demand slope reduces the inventory management motive, and that a positive correlation between demand slope and investors' reservation prices leads to wider spreads. Our analysis reveals that the simultaneous arrival of buy and sell market orders (i) reduces the shadow cost of inventory, (ii) leads the HFM to reduce price pressures to execute larger flows, and (iii) introduces patterns of nonlinearity in the intraday dynamics of bid and ask spreads. Our empirical study shows that the market making strategy outperforms those that ignore randomness in demand, the simultaneous arrival of buy and sell market orders, and the local drift in the fundamental price.
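The core trade-off is easiest to see in a stripped-down, inventory-free one-period version of the model (a simplification with illustrative notation, not the paper's full problem): if expected fills at half-spread \delta are \tilde\alpha - \tilde\beta \delta with random intercept and slope, the expected spread revenue is

\mathbb{E}\bigl[\delta(\tilde\alpha - \tilde\beta\,\delta)\bigr] = \delta\,\mathbb{E}[\tilde\alpha] - \delta^{2}\,\mathbb{E}[\tilde\beta], \qquad \delta^{*} = \frac{\mathbb{E}[\tilde\alpha]}{2\,\mathbb{E}[\tilde\beta]},

so only the means of the demand coefficients matter in this benchmark; the inventory shadow cost, the liquidation penalty, and the drift forecasts studied in the paper all shift \delta^{*} away from it.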
arXiv
We investigate the portfolio execution problem under a framework in which volatility and liquidity are both uncertain. In our model, we assume that a multidimensional Markovian stochastic factor drives both of them. Moreover, we model indirect liquidity costs as temporary price impact, stipulating a power law to relate it to the agent's turnover rate. We first analyze the regularized setting, in which the admissible strategies do not ensure complete execution of the initial inventory. We prove the existence and uniqueness of a continuous and bounded viscosity solution of the Hamilton-Jacobi-Bellman equation, whence we obtain a characterization of the optimal trading rate. As a byproduct of our proof, we obtain a numerical algorithm. Then, we analyze the constrained problem, in which admissible strategies must guarantee complete execution to the trader. We solve it through a monotonicity argument, obtaining the optimal strategy as a singular limit of the regularized counterparts.
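A common way to write the power-law temporary impact the abstract refers to, in illustrative notation rather than the paper's: with inventory $q_t$, turnover rate $\dot q_t$, and a coefficient $\kappa_t > 0$ driven by the stochastic liquidity factor, the instantaneous execution cost is

\ell(t, \dot q_t) = \kappa_t\, \lvert \dot q_t \rvert^{1+\phi}, \qquad \phi > 0,

and the trader maximizes expected revenue net of $\int_0^T \ell(t,\dot q_t)\,\mathrm{d}t$ plus a risk term, which leads to the Hamilton-Jacobi-Bellman equation analyzed in the paper.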
arXiv
We extend the Annually Recalculated Virtual Annuity (ARVA) spending rule for retirement savings decumulation to include a cap and a floor on withdrawals. With a minimum withdrawal constraint, the ARVA strategy runs the risk of depleting the investment portfolio. We determine the dynamic asset allocation strategy which maximizes a weighted combination of expected total withdrawals (EW) and expected shortfall (ES), defined as the average of the worst five per cent of the outcomes of real terminal wealth. We compare the performance of our dynamic strategy to simpler alternatives which maintain constant asset allocation weights over time accompanied by either our same modified ARVA spending rule or withdrawals that are constant over time in real terms. Tests are carried out using both a parametric model of historical asset returns as well as bootstrap resampling of historical data. Consistent with previous literature that has used different measures of reward and risk than EW and ES, we find that allowing some variability in withdrawals leads to large improvements in efficiency. However, unlike the prior literature, we also demonstrate that further significant enhancements are possible through incorporating a dynamic asset allocation strategy rather than simply keeping asset allocation weights constant throughout retirement.
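The modified ARVA rule reduces to clamping the virtual-annuity amount, wealth divided by an annuity factor, between a floor and a cap (all in real terms). The sketch below uses a simple level-payment annuity factor and illustrative numbers; the paper's exact factor and parameters may differ.

# Hedged sketch of the ARVA withdrawal with a cap and a floor.
def arva_withdrawal(wealth, years_remaining, real_rate=0.01,
                    floor=35_000.0, cap=80_000.0):
    # level-payment real annuity factor (illustrative assumption)
    if real_rate == 0.0:
        annuity_factor = years_remaining
    else:
        annuity_factor = (1 - (1 + real_rate) ** -years_remaining) / real_rate
    return min(cap, max(floor, wealth / annuity_factor))

print(arva_withdrawal(1_000_000.0, 30))   # clamped virtual-annuity payout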
arXiv
Portfolio optimization is one of the fields most actively researched with machine learning approaches. Many researchers have attempted to solve this problem using deep reinforcement learning, owing to its inherent ability to handle the properties of financial markets. However, most of this work is hardly applicable to real-world trading, since it ignores or greatly simplifies the realistic constraints of transaction costs, which have a significantly negative impact on portfolio profitability. In our research, a conservative level of transaction fees and slippage is assumed for a realistic experiment. To improve performance under these constraints, we propose a novel Deterministic Policy Gradient with 2D Relative-attentional Gated Transformer (DPGRGT) model. By applying learnable relative positional embeddings along the time and asset axes, the model better captures the peculiar structure of financial data in the portfolio optimization domain. Gating layers and layer reordering are also employed for stable convergence of Transformers in reinforcement learning. In our experiment using 20 years of U.S. stock market data, our model outperformed baseline models, demonstrating its effectiveness.
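The cost model described can be sketched as proportional fees plus slippage applied to each rebalancing trade. The rates below are the kind of "conservative" assumptions the abstract mentions; the exact values and the paper's cost formula are not given in the abstract.

# Hedged sketch: per-rebalance transaction cost under proportional fees
# and slippage; rates are illustrative assumptions.
import numpy as np

def rebalance_cost(old_weights, new_weights, portfolio_value,
                   fee_rate=0.0025, slippage_rate=0.0005):
    # total turnover times (fee + slippage), in currency units
    turnover = np.abs(np.asarray(new_weights) - np.asarray(old_weights)).sum()
    return portfolio_value * turnover * (fee_rate + slippage_rate)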
arXiv
Storage of electricity has become increasingly important, due to the gradual replacement of fossil fuels by more variable and uncertain renewable energy sources. In this paper, we provide details on how to mathematically formalize a corresponding electricity storage contract, taking into account the physical limitations of a storage facility and the operational constraints of the electricity grid. We give details of a valuation technique to price these contracts, where the electricity prices follow a structural model based on a stochastic polynomial process. In particular, we show that the Fourier-based COS method can be used to price the contracts accurately and efficiently.
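Since only the characteristic function changes when moving from a textbook model to the structural polynomial price model of the paper, the COS method itself can be illustrated on the familiar Black-Scholes case. The sketch below prices a European call with the Fang-Oosterlee COS expansion; the storage contract's payoff coefficients and characteristic function would replace the call-specific pieces.

# Hedged illustration of the COS method: European call under GBM.
import numpy as np

def cos_call(S0, K, T, r, sigma, N=256, L=10.0):
    x = np.log(S0 / K)
    c1 = x + (r - 0.5 * sigma**2) * T          # mean of y = log(S_T / K)
    c2 = sigma**2 * T                           # variance of y
    a, b = c1 - L * np.sqrt(c2), c1 + L * np.sqrt(c2)   # truncation range
    u = np.arange(N) * np.pi / (b - a)

    # characteristic function of y under Black-Scholes
    phi = np.exp(1j * u * c1 - 0.5 * c2 * u**2)

    # payoff cosine coefficients V_k of K*(e^y - 1)^+ on [0, b]
    def chi(c, d):
        return (np.cos(u * (d - a)) * np.exp(d)
                - np.cos(u * (c - a)) * np.exp(c)
                + u * (np.sin(u * (d - a)) * np.exp(d)
                       - np.sin(u * (c - a)) * np.exp(c))) / (1.0 + u**2)

    def psi(c, d):
        out = np.empty(N)
        out[0] = d - c
        out[1:] = (np.sin(u[1:] * (d - a)) - np.sin(u[1:] * (c - a))) / u[1:]
        return out

    V = 2.0 / (b - a) * K * (chi(0.0, b) - psi(0.0, b))
    terms = np.real(phi * np.exp(-1j * u * a)) * V
    terms[0] *= 0.5                             # first term weighted by 1/2
    return np.exp(-r * T) * terms.sum()

print(cos_call(100.0, 100.0, 1.0, 0.05, 0.2))   # ~10.45 (Black-Scholes value)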