# Research articles for 2020-11-22

arXiv

We provide an economically sound micro-foundation for linear price impact models by deriving them as the equilibrium of a suitable agent-based system. Our setup generalizes the well-known Kyle model by dropping the assumption of a terminal time at which fundamental information is revealed, so as to describe a stationary market, while retaining agents' rationality and asymmetric information. We investigate the stationary equilibrium for arbitrary Gaussian noise trades and fundamental information, and show that the setup is compatible with universal price diffusion at small times and non-universal mean reversion at the time scales at which fluctuations in fundamentals decay. Our model provides a testable relation between the volatility of prices, the magnitude of fluctuations in fundamentals, and the level of volume traded in the market.
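
The linear impact mechanism the abstract refers to can be illustrated with a toy discrete-time rule p_t = p_{t-1} + lam * q_t. This is a minimal sketch with an assumed impact coefficient `lam` and i.i.d. Gaussian order flow, not the paper's equilibrium derivation:

```python
import random

def simulate_prices(n_steps, lam=0.5, sigma_q=1.0, p0=100.0, seed=0):
    """Simulate a discrete-time linear price impact model:
    p_t = p_{t-1} + lam * q_t, with i.i.d. Gaussian order flow q_t."""
    rng = random.Random(seed)
    p = p0
    path = [p]
    for _ in range(n_steps):
        q = rng.gauss(0.0, sigma_q)  # net signed order flow this step
        p += lam * q                 # linear (Kyle-type) impact
        path.append(p)
    return path

path = simulate_prices(10_000)
```

Under i.i.d. flow the variance of price changes grows linearly in time, i.e. the price diffuses, consistent with the universal small-time diffusion the abstract mentions; mean reversion would require correlated fundamentals.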

arXiv

This study deals with the pricing and hedging of single-tranche collateralized debt obligations (STCDOs). We specify an affine two-factor model in which a catastrophic risk component is incorporated. Apart from being analytically tractable, this model captures the dynamics of super-senior tranches thanks to the catastrophic component. We estimate the factor model on iTraxx Europe data with six tranches and four different maturities, using a quasi-maximum likelihood (QML) approach in conjunction with the Kalman filter. We derive the model-based variance-minimizing strategy for hedging STCDOs with a dynamically rebalanced portfolio on the underlying swap index, and analyze its actual performance on the iTraxx Europe data. To assess the hedging performance further, we run a simulation analysis in which normal and extreme loss scenarios are generated via importance sampling. Both the in-sample hedging and the simulation analysis suggest that the variance-minimizing strategy is most effective for mezzanine tranches, in terms of yielding less risky hedging portfolios, while it fails to provide adequate hedging performance for equity tranches.
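
The filtering step inside such a QML estimation can be sketched at a much-reduced scale with a scalar linear-Gaussian state-space model. The parameters `a`, `c`, `q`, `r` below are hypothetical, and this is a one-factor toy rather than the paper's two-factor affine model:

```python
import math

def kalman_filter(ys, a, c, q, r, m0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + w_t, y_t = c*x_t + v_t,
    with Var(w) = q and Var(v) = r. Returns the filtered means and the
    Gaussian log-likelihood a QML estimator would maximize over (a, c, q, r)."""
    m, p, loglik, means = m0, p0, 0.0, []
    for y in ys:
        m_pred, p_pred = a * m, a * a * p + q   # predict
        s = c * c * p_pred + r                  # innovation variance
        e = y - c * m_pred                      # innovation
        k = c * p_pred / s                      # Kalman gain
        m = m_pred + k * e                      # update mean
        p = (1.0 - k * c) * p_pred              # update variance
        loglik += -0.5 * (math.log(2 * math.pi * s) + e * e / s)
        means.append(m)
    return means, loglik
```

In a QML approach one would wrap this log-likelihood in a numerical optimizer over the model parameters.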

arXiv

The central idea of the paper is to present a simple, general patchwork construction principle for multivariate copulas that creates unfavourable VaR (Value at Risk) scenarios while maintaining given marginal distributions. This is of particular interest for the construction of Internal Models in the insurance industry under Solvency II in the European Union.
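
Why the dependence structure matters for VaR, with marginals held fixed, is easy to see in a toy comparison: with identical standard-normal marginals, a comonotonic coupling produces a markedly worse VaR for the sum than the independence copula. This only illustrates the general point, not the paper's patchwork construction:

```python
import random
from statistics import NormalDist

def empirical_var(losses, alpha=0.99):
    """Empirical Value at Risk of a loss sample at level alpha."""
    s = sorted(losses)
    return s[int(alpha * len(s)) - 1]

rng = random.Random(1)
q = NormalDist().inv_cdf
u = [rng.uniform(1e-9, 1.0 - 1e-9) for _ in range(100_000)]
v = [rng.uniform(1e-9, 1.0 - 1e-9) for _ in range(100_000)]

# Same N(0, 1) marginals, two dependence structures for the sum X + Y:
independent = [q(a) + q(b) for a, b in zip(u, v)]  # independence copula
comonotone = [q(a) + q(a) for a in u]              # perfectly dependent
```

At the 99% level the comonotonic VaR should come out near 2 * 2.33 versus roughly sqrt(2) * 2.33 under independence, so the same marginals support materially different capital scenarios.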

arXiv

This paper studies the impact of a university opening on incentives for human capital accumulation among prospective students in its neighborhood. The opening causes an exogenous fall in the cost of attending university, through the decrease in distance, leading to an incentive to increase effort - shown by the positive effect on students' grades. I use an event study approach with two-way fixed effects to retrieve a causal estimate, exploiting the variation across groups of students that receive treatment at different times - mitigating the bias created by governments' decisions on the location of new universities. Results show an average increase of $0.038$ standard deviations in test grades for the municipality where the university was established, and are robust to a series of potential problems, including some of the usual concerns in event study models.
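
The two-way fixed effects estimator behind such event studies can be sketched via double demeaning. The data below are simulated with hypothetical unit and time effects and a homogeneous treatment effect of 0.038; under staggered adoption with heterogeneous effects (one of the usual concerns the abstract alludes to), TWFE would need more care:

```python
from statistics import mean

def twfe_effect(y, unit, time, d):
    """Two-way fixed effects estimate: double-demean the outcome and the
    treatment dummy (subtract unit mean and time mean, add grand mean),
    then regress one on the other without an intercept."""
    def demean(v):
        um = {u: mean(x for x, uu in zip(v, unit) if uu == u) for u in set(unit)}
        tm = {t: mean(x for x, tt in zip(v, time) if tt == t) for t in set(time)}
        g = mean(v)
        return [x - um[uu] - tm[tt] + g for x, uu, tt in zip(v, unit, time)]
    yd, dd = demean(y), demean(d)
    return sum(a * b for a, b in zip(yd, dd)) / sum(b * b for b in dd)

# Staggered adoption: unit i is treated from period i + 1 (unit 3 never).
unit, time, d, y = [], [], [], []
for i in range(4):
    for t in range(4):
        treated = 1 if (i < 3 and t > i) else 0
        unit.append(i); time.append(t); d.append(treated)
        y.append(0.5 * i + 0.2 * t + 0.038 * treated)  # noiseless: exact recovery
```

Because the simulated outcome is exactly additive in unit and time effects, the estimator recovers the treatment effect exactly.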

arXiv

We explore reinforcement learning methods for finding the optimal policy in the linear quadratic regulator (LQR) problem. In particular, we consider the convergence of policy gradient methods in the settings of known and unknown parameters. We produce a global linear convergence guarantee for this approach in the setting of a finite time horizon and stochastic state dynamics under weak assumptions. The convergence of a projected policy gradient method is also established, in order to handle problems with constraints. We illustrate the performance of the algorithm with two examples. The first example is the optimal liquidation of a holding in an asset; we show results both for the case where we assume a model for the underlying dynamics and for the case where we apply the method directly to the data. The empirical evidence suggests that the policy gradient method can learn the globally optimal solution for a larger class of stochastic systems containing the LQR framework, and that it is more robust to model mis-specification than a model-based approach. The second example is an LQR system in a higher-dimensional setting with synthetic data.
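
A zeroth-order version of the policy gradient idea can be sketched on a scalar LQR. The coefficients `a`, `b`, `q`, `r` below are illustrative, and the finite-difference gradient stands in for a sampled policy gradient; a Riccati fixed point serves as the benchmark:

```python
def lqr_cost(K, a=0.9, b=0.5, q=1.0, r=1.0, x0=1.0, T=200):
    """Rollout cost of the stationary linear policy u = -K*x for
    x_{t+1} = a*x_t + b*u_t with stage cost q*x^2 + r*u^2."""
    x, cost = x0, 0.0
    for _ in range(T):
        u = -K * x
        cost += q * x * x + r * u * u
        x = a * x + b * u
    return cost

def policy_gradient(K=0.0, lr=0.05, eps=1e-4, iters=500):
    """Gradient descent on the gain K using finite-difference gradients,
    a zeroth-order stand-in for the policy gradient."""
    for _ in range(iters):
        g = (lqr_cost(K + eps) - lqr_cost(K - eps)) / (2.0 * eps)
        K -= lr * g
    return K

def riccati_gain(a=0.9, b=0.5, q=1.0, r=1.0, iters=500):
    """Benchmark: fixed-point iteration on the discrete-time Riccati equation."""
    P = q
    for _ in range(iters):
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    return a * b * P / (r + b * b * P)
```

For this stable scalar system the learned gain approaches the Riccati benchmark, which is the point of the global convergence results the abstract describes.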

arXiv

We introduce a first theory of price impact in the presence of an interest-rate term structure. We explain how to formulate instantaneous and transient price impact on bonds with different maturities, including a cross price impact that is endogenous to the term structure. We connect this impact to classic no-arbitrage theory for interest rate markets, showing that impact can be embedded in the pricing measure and that no-arbitrage can be preserved. We present pricing examples in the presence of price impact and numerical examples of how impact changes the shape of the term structure. Finally, to show that our approach is applicable, we solve an optimal execution problem in interest rate markets with the type of price impact developed in the paper.

arXiv

There is growing interest in the integration of energy infrastructures to increase systems' flexibility and reduce operational costs. The most studied case is the synergy between electricity and heating networks. Even though integrated heat and power markets can be described by a convex optimization problem, prices derived from the dual values do not guarantee cost recovery. In this work, a two-step approach is presented for the calculation of the optimal energy dispatch and prices. The proposed methodology guarantees cost recovery for each of the energy vectors and revenue adequacy for the integrated market.
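
For a single energy vector with convex (linear) costs, marginal-cost pricing already delivers cost recovery; the hypothetical merit-order sketch below illustrates that baseline. The paper's contribution concerns the harder integrated heat-and-power case, where this property can fail for dual prices:

```python
def merit_order_dispatch(gens, demand):
    """Dispatch generators, given as (marginal_cost, capacity) pairs,
    cheapest-first, and price energy at the marginal unit's cost."""
    schedule, remaining, price = [], demand, 0.0
    for cost, cap in sorted(gens):
        out = min(cap, remaining)
        if out > 0:
            price = cost  # price set by the last dispatched (marginal) unit
        schedule.append((cost, cap, out))
        remaining -= out
    assert remaining == 0, "demand exceeds total capacity"
    return schedule, price

schedule, price = merit_order_dispatch(
    [(30.0, 50.0), (10.0, 50.0), (20.0, 50.0)], demand=80.0)
```

Every dispatched unit earns price * out >= cost * out, i.e. cost is recovered without side payments, which is exactly what no longer comes for free in the integrated market.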

arXiv

This paper studies the retirement decision and the optimal investment and consumption strategies under habit persistence for an agent who can choose the retirement time. The optimization problem is formulated as an interconnected optimal stopping and stochastic control problem (a stopping-control problem) over a finite time horizon. The problem contains three state variables: wealth $x$, habit level $h$ and wage rate $w$. We aim to derive the retirement boundary of this "wealth-habit-wage" triplet $(x,h,w)$. A dual relation is proposed and proved to convert the original problem into its dual. We obtain the retirement boundary of the dual variables from an obstacle-type free boundary problem and, using the dual relation, recover the retirement boundary of the primal variables together with feedback forms of the optimal strategies. We show that if the so-called "de facto wealth" exceeds a critical proportion of the wage, it is optimal for the agent to retire immediately. In numerical applications, we show how "de facto wealth" determines the retirement decisions and optimal strategies. Moreover, we observe discontinuities at the retirement boundary: the investment proportion always jumps down upon retirement, while consumption may jump up or down, depending on the change in marginal utility. We also find that an agent with a higher standard of living tends to work longer.

SSRN

We show that risk mitigating incentives dominate risk shifting incentives in fragile banks. Risk shifting could be particularly severe in banking, since it is the most opaque industry and banks are among the most leveraged corporations, with very low skin in the game. To analyze this question, we exploit security trading by banks during financial crises, as banks can easily and quickly change their risk exposure within their security portfolio. However, in contrast with the risk shifting hypothesis, we find that less capitalized banks take relatively less risk after financial market stress shocks. We show this using a supervisory ISIN-bank-month level dataset from Italy with all securities for each bank. Our results are over and above capital regulation, as we show lower reach-for-yield effects by less capitalized banks within government bonds (with zero risk weights) or within securities with the same rating and maturity in the same month (which determines regulatory capital). Effects are robust to controlling for the covariance with the existing portfolio, and less capitalized banks, if anything, reduce concentration risk. Further, effects are stronger when uncertainty is higher, even though risk shifting motives may then be stronger. Moreover, three separate tests - based on different accounting portfolios (trading book versus held to maturity), the distribution of capital, and franchise value - suggest that banks' own incentives, rather than supervision, are the main drivers. Results are confirmed if we consider other sources of balance sheet fragility and different measures of risk-taking. Finally, evidence from the recent COVID-19 shock corroborates the findings from the Global Financial Crisis and the Euro Area Sovereign Crisis.

arXiv

We consider financial networks, where banks are connected by contracts such as debts or credit default swaps. We study the clearing problem in these systems: we want to know which banks end up in default, and what portion of their liabilities these defaulting banks can fulfill. We analyze these networks in a sequential model where banks announce their defaults one at a time, and the system evolves in a step-by-step manner.

We first consider the reversible model of these systems, where banks may return from default. We show that the stabilization time in this model can depend heavily on the ordering of announcements. However, we also show that there are systems where, for any choice of ordering, the process lasts for an exponential number of steps before eventual stabilization. We also show that finding the ordering with the smallest (or largest) number of banks ending up in default is an NP-hard problem. Furthermore, we prove that defaulting early can be an advantageous strategy for banks in some cases, and that, in general, finding the best time for a default announcement is NP-hard. Finally, we discuss how changing some properties of this setting affects the stabilization time of the process, and then use these techniques to devise a monotone model of the systems, which ensures that every network eventually stabilizes.
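
A simultaneous, Eisenberg-Noe style pro-rata clearing computation is a useful baseline for the sequential model studied here. The sketch below iterates the payment fixed point under hypothetical liabilities, ignoring CDS contracts and announcement order:

```python
def clearing_vector(liabilities, external):
    """Compute a clearing payment vector by fixed-point iteration:
    liabilities[i][j] is what bank i owes bank j, external[i] its outside
    assets. Each bank pays min(total owed, total assets), pro rata."""
    n = len(external)
    total = [sum(row) for row in liabilities]
    pay = [float(t) for t in total]  # start by assuming full payment
    for _ in range(100):
        new = []
        for i in range(n):
            inflow = external[i] + sum(
                pay[j] * liabilities[j][i] / total[j] if total[j] else 0.0
                for j in range(n))
            new.append(min(float(total[i]), inflow))
        if new == pay:
            break
        pay = new
    return pay
```

A bank whose payment falls short of its total liability is in default; in the sequential model of the paper, the order in which such shortfalls are announced is exactly what drives the complexity results.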

arXiv

Various poverty reduction strategies are being implemented in the pursuit of eliminating extreme poverty. One such strategy is increased access to microcredit in poor areas around the world. Microcredit, typically defined as the supply of small loans to underserved entrepreneurs, originally aimed at displacing expensive local money-lenders, has been both praised and criticized as a development tool (Banerjee et al., 2015b). This paper presents an analysis of heterogeneous impacts from increased access to microcredit using data from three randomised trials. Recognising that, in general, the impact of a policy intervention varies conditional on an unknown set of factors, we investigate in particular whether heterogeneity presents itself as groups of winners and losers, and whether such subgroups share characteristics across RCTs. We find no evidence of impacts, either average or distributional, from increased access to microcredit on consumption levels. In contrast, the lack of average effects on profits seems to mask heterogeneous impacts. The findings are, however, not robust to the specific machine learning algorithm applied: switching from the better performing Elastic Net to the worse performing Random Forest leads to a sharp increase in the variance of the estimates. In this context, methods developed by Chernozhukov et al. (2019) to evaluate the relative performance of machine learning algorithms provide a disciplined way for the analyst to deal with the uncertainty over which algorithm to deploy.
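
The "winners and losers" notion can be illustrated with simulated data in which a near-zero average effect masks opposite subgroup effects. This toy difference-in-means split is far simpler than the ML-based methods of Chernozhukov et al. (2019), and every number below is invented:

```python
import random

def subgroup_ates(x, d, y):
    """Difference-in-means treatment effect within each subgroup of the
    binary covariate x: a crude probe for winners and losers."""
    ates = {}
    for g in sorted(set(x)):
        treated = [yi for xi, di, yi in zip(x, d, y) if xi == g and di == 1]
        control = [yi for xi, di, yi in zip(x, d, y) if xi == g and di == 0]
        ates[g] = sum(treated) / len(treated) - sum(control) / len(control)
    return ates

rng = random.Random(7)
x = [rng.randint(0, 1) for _ in range(4000)]  # subgroup indicator
d = [rng.randint(0, 1) for _ in range(4000)]  # random treatment assignment
effect = {0: -1.0, 1: 1.0}                    # losers vs winners
y = [effect[xi] * di + rng.gauss(0.0, 0.1) for xi, di in zip(x, d)]
```

Because the two subgroup effects cancel, the pooled average effect is close to zero even though each subgroup's effect is large, which is the masking pattern the abstract describes for profits.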