Research articles for 2019-03-26
arXiv
We develop a one-dimensional notion of affine processes under parameter uncertainty, which we call non-linear affine processes. This is done as follows: given a set of parameters for the process, we construct a corresponding non-linear expectation on the path space of continuous processes. By a general dynamic programming principle we link this non-linear expectation to a variational form of the Kolmogorov equation, where the generator of a single affine process is replaced by the supremum over all corresponding generators of affine processes with parameters in the parameter set. This non-linear affine process yields a tractable model for Knightian uncertainty, especially for modelling interest rates under ambiguity.
We then develop an appropriate Itô formula and the respective term-structure equations, and study the non-linear versions of the Vasicek and the Cox-Ingersoll-Ross (CIR) models. Thereafter we introduce the non-linear Vasicek-CIR model. This model is particularly suitable for modelling interest rates when one does not want to restrict the state space a priori, and the approach hence resolves the modelling issue arising with negative interest rates.
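As a schematic illustration of the variational Kolmogorov equation described above (the notation below is ours, not the paper's), the generator of a single affine process is replaced by a supremum over the parameter set \Theta:

\partial_t v(t,x) = \sup_{\theta \in \Theta} \Big( b_\theta(x)\, \partial_x v(t,x) + \tfrac{1}{2} a_\theta(x)\, \partial_{xx} v(t,x) \Big), \qquad v(0,x) = \psi(x),

where, in the one-dimensional affine case, drift and squared diffusion are affine in the state, b_\theta(x) = b_0 + b_1 x and a_\theta(x) = a_0 + a_1 x, with \theta = (b_0, b_1, a_0, a_1) ranging over \Theta.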
SSRN
We document the dynamics of primary municipal bond (muni) markets after severe natural disasters. We find that yields of muni issuance increase significantly in the first three months after disasters. Disasters have little effect on issuers' credit risk but can temporarily reduce investors' demand, which is consistent with the salience theory of choice (Bordalo, Gennaioli, and Shleifer, 2012). Natural disasters significantly increase the proceeds from muni issuances. To offset the higher financing costs, muni issuers use shorter maturities and less complex structures. The higher yields after disasters provide speculation opportunities.
SSRN
We study T. Cover's rebalancing option (Ordentlich and Cover 1998) under discrete hindsight optimization in continuous time. The payoff in question is equal to the final wealth that would have accrued to a $1 deposit into the best of some finite set of (perhaps levered) rebalancing rules determined in hindsight. A rebalancing rule (or fixed-fraction betting scheme) amounts to fixing an asset allocation (e.g. 200% stocks and -100% bonds) and then continuously executing rebalancing trades to counteract allocation drift. Restricting the hindsight optimization to a small number of rebalancing rules (e.g. two) has some advantages over the pioneering approach taken by Cover & Company in their brilliant theory of universal portfolios (1986, 1991, 1996, 1998), where one's on-line trading performance is benchmarked relative to the final wealth of the best unlevered rebalancing rule of any kind in hindsight. Our approach lets practitioners express an a priori view that one of the favored asset allocations ("bets") b ∈ {b1,...,bn} will turn out to have performed spectacularly well in hindsight. By limiting our robustness to some discrete set of asset allocations (rather than all possible asset allocations), we reduce the price of the rebalancing option and guarantee to achieve a correspondingly higher percentage of the hindsight-optimized wealth at the end of the planning period. A practitioner who lives to delta-hedge this variant of Cover's rebalancing option through several decades is guaranteed to see the day that his realized compound-annual capital growth rate is very close to that of the best of the discrete set of rebalancing rules in hindsight. Hence the appeal of the rock-bottom option price.
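A minimal sketch of the hindsight-optimized payoff over a small set of fixed-fraction rebalancing rules (assumptions: two assets simulated under geometric Brownian motion, daily rebalancing as a proxy for continuous rebalancing; the allocations and parameters are illustrative, not taken from the paper):

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper)
mu, sigma, r = 0.07, 0.18, 0.02      # stock drift/vol, bond (cash) rate
T, n = 10.0, 2520                    # 10 years, daily steps
dt = T / n
bets = [0.5, 1.0, 2.0]               # fixed stock fractions, e.g. 2.0 = 200% stocks / -100% bonds

# Simulate one stock path under geometric Brownian motion
z = rng.standard_normal(n)
log_ret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
simple_ret = np.exp(log_ret) - 1.0

# Final wealth of each fixed-fraction rebalancing rule (rebalanced every step)
wealth = {}
for b in bets:
    port_ret = b * simple_ret + (1.0 - b) * (np.exp(r * dt) - 1.0)
    wealth[b] = np.prod(1.0 + port_ret)

best_bet = max(wealth, key=wealth.get)
hindsight_payoff = wealth[best_bet]  # payoff of the discrete rebalancing option on this path
print(best_bet, hindsight_payoff)

On each path the option pays the final wealth of whichever rule in the set turned out best in hindsight, which is the payoff studied above.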
SSRN
I study how mutual fund managers affect asset prices when they face fund flow risk and liquidity risk. Sudden and large fund outflows may force managers to liquidate their asset holdings even when liquidation costs are high. I present an equilibrium model with fund flow risk and liquidity risk, with two types of investors: delegated managers and direct investors. The model implies that expected returns are driven by two factors: 1) the co-movement of net returns, i.e., returns net of liquidity costs, with net market returns (the liquidity-adjusted market beta) and 2) the co-movement of net returns with aggregate innovations in fund flows (the fund flow beta). I empirically test the model and find that the implied stochastic discount factor explains the average returns of 50 size, book-to-market, liquidity, and flow beta portfolios jointly and separately. In fact, the fund flow beta subsumes the liquidity-adjusted market beta across different model specifications. Moreover, the magnitude of the price of risk for fund flow beta is very similar across the different sets of test assets, supporting the prediction that aggregate innovations in fund flows are an important component of the stochastic discount factor. Conditional on liquidity risk, the annual fund flow risk premium is 4.92% for illiquid stocks and 2.68% for liquid stocks.
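In schematic form (illustrative notation, not the paper's), the two-factor implication for expected returns reads

E[R_i^{net}] - R_f = \lambda_m\, \beta_i^{m,net} + \lambda_f\, \beta_i^{f},

where \beta_i^{m,net} is the liquidity-adjusted market beta of returns net of liquidity costs and \beta_i^{f} is the beta with respect to aggregate fund-flow innovations.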
arXiv
The predictive performance of point forecasts for a statistical functional, such as the mean, a quantile, or a certain risk measure, is commonly assessed in terms of scoring (or loss) functions. A scoring function should be (strictly) consistent for the functional of interest, that is, the expected score should be minimised by the correctly specified functional value. A functional is elicitable if it possesses a strictly consistent scoring function. In quantitative risk management, the elicitability of a risk measure is closely related to comparative backtesting procedures. As such, it has gained considerable interest in the debate about which risk measure to choose in practice. While this discussion has mainly focused on the dichotomy between Value at Risk (VaR) - a quantile - and Expected Shortfall (ES) - a tail expectation, this paper is concerned with Range Value at Risk (RVaR). RVaR can be regarded as an interpolation of VaR and ES, which constitutes a tradeoff between the sensitivity of the latter and the robustness of the former. Recalling that RVaR is not elicitable, we show that a triplet of RVaR with two VaR components at different levels is elicitable. We characterise the class of strictly consistent scoring functions. Moreover, additional properties of these scoring functions are examined, including the diagnostic tool of Murphy diagrams. The results are illustrated with a simulation study, and we put our approach in perspective with respect to the classical approach of trimmed least squares in robust regression.
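For reference, under one common convention (levels 0 < \alpha < \beta < 1; the paper's convention may differ),

\mathrm{RVaR}_{\alpha,\beta}(X) = \frac{1}{\beta-\alpha} \int_\alpha^\beta \mathrm{VaR}_u(X)\, du,

which recovers \mathrm{VaR}_\alpha as \beta \downarrow \alpha and \mathrm{ES}_\alpha for \beta = 1; the elicitable object studied here is the triplet (\mathrm{VaR}_\alpha, \mathrm{VaR}_\beta, \mathrm{RVaR}_{\alpha,\beta}).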
arXiv
Various peer-to-peer energy markets have emerged in recent years in an attempt to manage distributed energy resources in a more efficient way. One of the main challenges these models face is how to create and allocate incentives to participants. Cooperative game theory offers a methodology to financially reward prosumers based on their contributions made to the local energy coalition using the Shapley value, but its high computational complexity limits the size of the game. This paper explores a stratified sampling method proposed in existing literature for Shapley value estimation, and modifies the method for a peer-to-peer cooperative game to improve its scalability. Finally, selected case studies verify the effectiveness of the proposed coalitional stratified random sampling method and demonstrate results from large games.
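A minimal sketch of the stratified-sampling idea for Shapley value estimation (stratification by coalition size; the paper's exact modification for the peer-to-peer game and the energy coalition value function are not reproduced, so a hypothetical value function is used):

import random

def shapley_stratified(players, value, m_per_stratum=200, seed=0):
    """Stratified-sampling estimate of Shapley values.

    Strata are coalition sizes: phi_i = (1/n) * sum_s E[ v(S + {i}) - v(S) | |S| = s ].
    `value` is a hypothetical coalition value function: value(frozenset) -> float.
    """
    rng = random.Random(seed)
    n = len(players)
    phi = {i: 0.0 for i in players}
    for i in players:
        others = [p for p in players if p != i]
        for s in range(n):                      # stratum: coalition size s (without i)
            total = 0.0
            for _ in range(m_per_stratum):
                S = frozenset(rng.sample(others, s))
                total += value(S | {i}) - value(S)
            phi[i] += total / m_per_stratum     # within-stratum average
        phi[i] /= n                             # equal weight 1/n per size stratum
    return phi

# Toy example with a made-up super-additive value function
players = list(range(6))
value = lambda S: len(S) ** 1.5
print(shapley_stratified(players, value))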
arXiv
Among the various market structures under peer-to-peer energy sharing, one model based on cooperative game theory provides clear incentives for prosumers to collaboratively schedule their energy resources. The computational complexity of this model, however, increases exponentially with the number of participants. To address this issue, this paper proposes the application of K-means clustering to the energy profiles following the grand coalition optimization. The cooperative model is run with the "clustered players" to compute their payoff allocations, which are then further distributed among the prosumers within each cluster. Case studies show that the proposed method can significantly improve the scalability of the cooperative scheme while maintaining a high level of financial incentives for the prosumers.
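A minimal sketch of the clustering step, assuming energy profiles are stored as one vector per prosumer (hypothetical data; the cooperative optimization and payoff redistribution themselves are not shown):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical half-hourly net-energy profiles for 200 prosumers (48 periods)
profiles = rng.normal(size=(200, 48))

k = 8                                   # number of "clustered players"
km = KMeans(n_clusters=k, n_init=10, random_state=1).fit(profiles)

# Aggregate profile of each clustered player = sum of its members' profiles
cluster_profiles = np.vstack([profiles[km.labels_ == c].sum(axis=0) for c in range(k)])
# The cooperative game is then played among the k clusters; each cluster's payoff
# is subsequently redistributed among its member prosumers.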
SSRN
This experiment uses a Monte Carlo simulation designed to test whether the problems associated with the use of accounting identities are present in the model of Fazzari, Hubbard, and Petersen (1988). The Monte Carlo simulation creates 10,000 sets of randomly generated cash flow, Tobin's Q, and error term variables, which in turn shape an investment variable that depends on them. Investment and cash flow are also related through an accounting semi-identity, or accounting partial identity (API). OLS estimations verify that the estimated coefficients do not represent the underlying causal relation. The closer the data are to the accounting identity, the less the regression tells us about the causal relation.
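A minimal sketch of the simulation machinery (illustrative data-generating process and coefficients, not the paper's design; in particular, the accounting partial identity linking investment and cash flow is not imposed here):

import numpy as np

rng = np.random.default_rng(2)

def one_replication(n=500):
    # Illustrative data-generating process (not the paper's exact design)
    cf = rng.normal(1.0, 0.3, n)          # cash flow
    q = rng.normal(2.0, 0.5, n)           # Tobin's Q
    eps = rng.normal(0.0, 0.2, n)
    inv = 0.4 * cf + 0.1 * q + eps        # investment built from the generated variables
    X = np.column_stack([np.ones(n), cf, q])
    beta_hat, *_ = np.linalg.lstsq(X, inv, rcond=None)
    return beta_hat[1:]                   # estimated cash-flow and Q coefficients

estimates = np.array([one_replication() for _ in range(10_000)])
print(estimates.mean(axis=0))             # compare with the coefficients used above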
arXiv
This paper considers the problem of predicting the number of events that have occurred in the past, but which are not yet observed due to a delay. Such delayed events are relevant in predicting the future cost of warranties, pricing maintenance contracts, determining the number of unreported claims in insurance and in modeling the outbreak of diseases. Disregarding these unobserved events results in a systematic underestimation of the event occurrence process. Our approach puts emphasis on modeling the time between the occurrence and observation of the event, the so-called observation delay. We propose a granular model for the heterogeneity in this observation delay based on the occurrence day of the event and on calendar day effects in the observation process, such as weekday and holiday effects. We illustrate this approach on a European general liability insurance data set where the occurrence of an accident is reported to the insurer with delay.
arXiv
The granting process of all credit institutions rejects applicants who seem too risky with regard to the repayment of their debt. A credit score is calculated and associated with a cut-off value beneath which an applicant is rejected. Developing a new score requires a learning dataset in which the response variable good/bad borrower is known, so that rejects are de facto excluded from the learning process. We first introduce the context and some useful notation. Then we formalize whether this particular sampling has consequences for the score's relevance. Finally, we elaborate on methods that use the characteristics of not-financed clients and conclude that none of these methods are satisfactory in practice, using data from Crédit Agricole Consumer Finance.
-----
A credit-granting system may reject loan applications judged too risky. Within this system, the credit score provides a value measuring a risk of default, which is compared with an acceptance threshold. This score is built exclusively on data from financed clients, containing in particular the information 'good or bad payer', whereas it is subsequently applied to all applications. Is such a score statistically relevant? In this note, we make this question precise, formalize it, and study the effect of the absence of the not-financed clients on the scores obtained. We then present methods for reintegrating the not-financed clients and conclude, based on data from Crédit Agricole Consumer Finance, that they are ineffective in practice.
SSRN
Barras, Scaillet, and Wermers (2010) propose the False Discovery Rate (FDR) to separate skill (alpha) from luck in fund performance. Using simulations with parameters informed by the data, we find that this methodology is overly conservative and underestimates the proportion of nonzero-alpha funds. For example, 65% of funds with economically large alphas of ±2% are misclassified as zero-alpha. This bias arises from the low signal-to-noise ratio in fund returns and the consequent low statistical power. Our results raise concerns regarding the FDR's applicability in performance evaluation and other domains with low power, and can materially change its conclusion that most funds have zero alpha.
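A minimal sketch (illustrative calibration, not the papers') of why low statistical power inflates the estimated zero-alpha proportion under a Storey-type FDR estimator:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Illustrative setup
n_funds, T = 2000, 240                 # funds, months of returns
sigma = 0.02                           # monthly residual volatility
pi0_true = 0.75                        # true share of zero-alpha funds
alpha_month = 0.02 / 12                # +/- 2% annual alpha for the rest

true_alpha = np.zeros(n_funds)
skilled = rng.random(n_funds) > pi0_true
true_alpha[skilled] = rng.choice([-1.0, 1.0], skilled.sum()) * alpha_month

# Alpha estimates drawn directly with standard error sigma/sqrt(T),
# a shortcut for estimating alpha from T monthly returns
alpha_hat = true_alpha + rng.normal(0.0, sigma / np.sqrt(T), n_funds)
t_stat = alpha_hat / (sigma / np.sqrt(T))
p_val = 2 * (1 - stats.norm.cdf(np.abs(t_stat)))

# Storey-type estimate of the zero-alpha proportion
lam = 0.6
pi0_hat = np.mean(p_val > lam) / (1 - lam)
print(pi0_true, pi0_hat)               # low power inflates the estimated zero-alpha share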
SSRN
I construct a long history of risk exposures from derivatives using detailed position-level data on interest rate and equity instruments to show that inconsistencies in regulation distort the hedging choices of US life insurers. I exploit a shift in the regulation that provides inconsistent incentives to hedge economically similar products due to differences in the sensitivity of regulatory capital to movements in interest rates. I show that hedging increases and becomes more sensitive to interest rate fluctuations for insurers that underwrite products that became risk sensitive under the new regulation. However, insurers that underwrite products that have similar economic exposures but no regulatory sensitivity to interest rates do not increase hedging but instead increase off-balance sheet transfers through reinsurance. Consistent with regulation limiting hedging choices, tighter regulatory constraints lead to lower hedging. Using data on collateral posted to counterparties, I show that lower hedging is not due to collateral constraints. My findings have implications for the fragility of life insurers going forward as regulation interacts with monetary policy in a way that makes the framework insensitive when interest rates rise.
SSRN
Crowdlending has emerged in recent years as an innovative way to finance new ventures and small companies. However, digitalized funding is a new technology itself; therefore, it is prone to mispricing and inefficiencies. We investigate whether peer-to-peer crowdlending to businesses provides investors with returns consistent with the level of risk borne. By studying over 3000 loans mediated on 68 European platforms we show that the returns are inversely related to loans' riskiness, suggesting that, on average, crowdfunded loans are mispriced. Our results have important implications for the debate about the role of regulation in FinTech.
SSRN
This study provides an in-depth analysis of how to estimate risk-neutral moments robustly. A simulation and an empirical study show that estimating risk-neutral moments presents a trade-off between (1) the bias of estimates caused by a limited strike price domain and (2) the variance of estimates induced by microstructure noise. The best trade-off is offered by option-implied quantile moments estimated from a volatility surface interpolated with a local-linear kernel regression and extrapolated linearly. A similarly good trade-off is achieved by estimating regular central option-implied moments from a volatility surface interpolated with a cubic smoothing spline and flat extrapolation.
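A minimal sketch of one of the interpolation/extrapolation schemes mentioned above, a cubic smoothing spline with flat extrapolation (hypothetical quotes; the moment formulas themselves are not shown):

import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical implied-volatility quotes at observed strikes
strikes = np.array([80.0, 90.0, 95.0, 100.0, 105.0, 110.0, 120.0])
ivs     = np.array([0.28, 0.24, 0.22, 0.21, 0.205, 0.21, 0.23])

spline = UnivariateSpline(strikes, ivs, k=3, s=1e-4)   # cubic smoothing spline

def implied_vol(k):
    """Interpolate inside the observed strike range, extrapolate flat outside it."""
    k = np.asarray(k, dtype=float)
    k_clipped = np.clip(k, strikes.min(), strikes.max())
    return spline(k_clipped)

grid = np.linspace(50.0, 150.0, 201)   # dense strike grid for the moment integrals
iv_surface = implied_vol(grid)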
arXiv
We introduce a stacking version of the Monte Carlo algorithm in the context of option pricing. Introduced recently for aeronautic computations, this simple technique, in the spirit of current machine learning ideas, learns control variates by approximating Monte Carlo draws with some specified function. We describe the method from first principles, suggest appropriate fits, and demonstrate its efficiency in evaluating European and Asian call options in constant and stochastic volatility models.
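A minimal sketch of the learned-control-variate idea for a European call under Black-Scholes (a linear-quadratic fit in the terminal price, whose expectation is known in closed form under geometric Brownian motion; the paper's stacked fits and the Asian/stochastic-volatility cases are not reproduced):

import numpy as np

rng = np.random.default_rng(4)

# Black-Scholes European call, illustrative parameters
S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 1.0
n_pilot, n_main = 20_000, 200_000

def terminal_price(n):
    z = rng.standard_normal(n)
    return S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

payoff = lambda s: np.exp(-r * T) * np.maximum(s - K, 0.0)

# Known expectations of the basis functions under GBM
m1 = S0 * np.exp(r * T)
m2 = S0**2 * np.exp((2 * r + sigma**2) * T)

# 1) Learn the control variate on a pilot sample: fit payoff ~ a + b*S_T + c*S_T^2
s_pilot = terminal_price(n_pilot)
X_pilot = np.column_stack([np.ones(n_pilot), s_pilot, s_pilot**2])
coef, *_ = np.linalg.lstsq(X_pilot, payoff(s_pilot), rcond=None)

# 2) Use the fitted function as a control variate on an independent sample
s = terminal_price(n_main)
g = coef[0] + coef[1] * s + coef[2] * s**2
g_mean = coef[0] + coef[1] * m1 + coef[2] * m2         # exact E[g(S_T)]
plain = payoff(s).mean()
controlled = (payoff(s) - g).mean() + g_mean           # control-variate estimator
print(plain, controlled)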
SSRN
The purpose of this paper is to investigate whether energy blockchain-based cryptocurrencies can help diversify equity portfolios consisting primarily of leading energy companies in the US S&P Composite 1500 Energy Index. The key contributions are, first, assessing the importance of energy cryptos as alternative investments in portfolio management and, second, examining whether different volatility models such as ARMA and machine learning models can help investors make better-informed investment decisions. The methodology utilizes the traditional Markowitz mean-variance framework to obtain optimized portfolio risk and return combinations, together with principal component analysis to derive an energy crypto index. Different volatility measures, derived from the Cornish-Fisher adjusted variance, ARMA family classes, and machine learning models, are used to compare efficient portfolios which include or exclude the energy crypto index. The different models are assessed using the Sharpe and Sortino portfolio performance measures. Daily data are used, spanning from 21 November 2017 to 31 January 2019.
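A minimal sketch of the mean-variance step under hypothetical inputs (maximum-Sharpe tangency weights proportional to the inverse covariance times excess returns; the PCA index construction and the volatility forecasts are not shown):

import numpy as np

# Hypothetical annualized inputs for three energy stocks plus a crypto index (last asset)
mu = np.array([0.08, 0.06, 0.07, 0.12])
vol = np.array([0.20, 0.18, 0.22, 0.60])
corr = np.array([[1.00, 0.60, 0.50, 0.10],
                 [0.60, 1.00, 0.55, 0.10],
                 [0.50, 0.55, 1.00, 0.15],
                 [0.10, 0.10, 0.15, 1.00]])
cov = np.outer(vol, vol) * corr
rf = 0.02

# Tangency (maximum-Sharpe) weights
excess = mu - rf
w = np.linalg.solve(cov, excess)
w /= w.sum()

sharpe = (w @ mu - rf) / np.sqrt(w @ cov @ w)
print(w.round(3), sharpe.round(3))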
SSRN
Suppose that asset pricing factors are just p-hacked noise. How much p-hacking is required to produce the 300 factors documented by academics? I show that, if 10,000 academics generate 1 factor every minute, it takes 15 million years of p-hacking. This absurd conclusion comes from applying the p-hacking theory to published data. To fit the fat right tail of published t-stats, the p-hacking theory requires that the probability of publishing t-stats < 6.0 is infinitesimal. Thus it takes a ridiculous amount of p-hacking to publish a single t-stat. These results show that p-hacking alone cannot explain the factor zoo.
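A back-of-envelope check, using only the figures quoted above: 10,000 academics producing one factor per minute for 15 million years generate about

10^4 \times 525{,}600\ \text{minutes/year} \times 1.5 \times 10^7\ \text{years} \approx 7.9 \times 10^{16}

candidate factors, so producing 300 published factors corresponds to roughly one publishable factor per 2.6 \times 10^{14} candidates, i.e. a per-candidate publication probability of about 4 \times 10^{-15}, which is the "infinitesimal" probability the abstract refers to.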