Research articles for 2020-03-04
SSRN
This project explores capital markets risk exposure from water use in key industrial sectors in the Great Lakes region, represented by a subset of the region's largest companies and water users. The largest industrial water users in the Great Lakes region include (in decreasing order): thermoelectric, industrial, domestic/public supply, and commercial sectors. It is important to distinguish between water withdrawal and consumptive use: the former is largely returned to the source reservoir after use in business operations, whereas the latter is removed from available supplies. Industry-specific water risks can be viewed through several lenses: watershed stewardship, the impact of water as a natural resource constraint on corporate operations, and risk pricing of water in the capital markets as a result of curtailed operations and growth. The approach taken here builds on portfolio theory by integrating share price trends with corporate accounting and voluntary disclosure data to extract a share price volatility risk metric - waterBeta - reflective of water and weather risk. The approach leverages signal processing waterBeta algorithms developed by Equarius Risk Analytics, a fintech firm, which price water/weather risk directly into share price volatility as a risk premium. The signal is derived from value-at-risk (VaR) models, which capture the short-term "tail" of extreme market volatility risks in share price behavior relative to industry- and sector-specific benchmarks. Simply put, a higher waterBeta means a company is more prone to capital market volatility as a result of climate risks. Comparing nine companies across four industry sectors, our results indicate that the waterBeta signal is lowest for utilities, followed by health care, consumer discretionary, and industrials. Companies with high waterBeta tend to exhibit a higher degree of tail risk volatility in their short-term share price, have a high percentage of facilities operating in water-stressed regions, and exhibit low water intensities (WI). Interestingly, these same high-waterBeta companies also tend to have high fixed asset turnover ratios, indicating that high-waterBeta companies are more dependent on fixed assets. Conversely, low-waterBeta companies exhibit low VaR, high water intensities and a high percentage of facilities in water-stressed locations. However, these companies have low fixed asset turnover ratios, and are thus inefficient at generating revenue from fixed assets. Even though our subset of companies was too small for sector-wide generalizations, it appears that when an entity has higher fixed asset turnover ratios, even small changes in water intensity or exposure to high water risk areas can have a significant impact on waterBeta. This is the case with Archer Daniels Midland (ADM). However, the opposite trend can also be observed, exemplified by the thermoelectric companies, which are the most inefficient at generating revenue from fixed assets and have the highest WI, yet exhibit the lowest waterBeta values. This is largely because thermoelectric plants/companies rely almost exclusively on surface water sources, such as the Great Lakes, and tend to have corporate/industry-wide water risk management strategies in place, given their high dependency on water. It should be noted that, at this time, this capital markets risk metric provides limited feedback to companies on how to address this volatility, given that the model is multiparametric.
Addressing water intensity (how much water a company uses to generate revenue) only has an impact if the company's efficiency in generating revenue from its physical assets can also be addressed. We are currently identifying factors that enable more targeted corporate risk management actions. As noted, the sample in this study was small and regionally focused. Broader universes of companies across multiple sectors, such as those represented in the "500" index, will serve to develop imputation and learning models to scale capital markets-based water risk observations.
SSRN
Peer-to-peer (P2P) lending, defined broadly as the use of non-bank online platforms that match borrowers with lenders, is arguably one of the most important innovations in the area of alternative finance. It changes the way lenders and borrowers interact, reconstructs the credit market by driving massive disintermediation, and reshapes our general understanding of financial systems. This Article analyzes the current state of the P2P lending market with the goal of developing policy recommendations to facilitate the safe growth of this important market segment. It starts by providing an extensive overview of the P2P lending market from four different perspectives: the financial intermediary role of the platforms, the characteristics of the market, benefits and risks faced by market participants, and its regulation in leading jurisdictions. This descriptive analysis demonstrates how the P2P lending market has changed over time and identifies recent trends, risks, and challenges that require regulatory attention. In light of this analysis, the Article then proceeds to develop three policy recommendations. First, it shows that P2P lending platforms, originally designed to serve as online marketplaces that only match lenders with borrowers, have gradually evolved into new financial intermediaries that perform various brokerage activities and provide tools intended to help lenders manage their credit risks. It then argues that regulation should be modified to better suit this new financial intermediary role and discusses key considerations. Second, the Article proposes imposing consistent disclosure standards tailored to the characteristics of different types of P2P lending platforms. It presents specific examples of such disclosure requirements and provides justifications for imposing them. Finally, the Article outlines key concerns related to the increasing involvement of institutional actors in P2P lending platforms – adverse selection among different types of lenders and growing financial stability risks – and discusses their regulatory implications.
arXiv
In this paper, we propose a neural network-based method for approximating expected exposures and potential future exposures of Bermudan options. In the first phase, the method relies on the Deep Optimal Stopping algorithm (DOS) proposed in \cite{DOS}, which learns the optimal stopping rule from Monte-Carlo samples of the underlying risk factors. Cashflow paths are then created by applying the learned stopping strategy to a new set of realizations of the risk factors. Furthermore, in the second phase the risk factors are regressed against the cashflow paths to obtain approximations of pathwise option values. The regression step is carried out by ordinary least squares as well as neural networks, and it is shown that the latter yields more accurate approximations.
The expected exposure is formulated both in terms of the cashflow paths and in terms of the pathwise option values, and it is shown that a simple Monte-Carlo average yields accurate approximations in both cases. The potential future exposure is estimated by the empirical $\alpha$-percentile.
Finally, it is shown that the expected exposures, as well as the potential future exposures, can be computed under either the risk-neutral measure or the real-world measure, without having to re-train the neural networks.
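As an illustration of the final aggregation step described above, the sketch below computes the expected exposure as a simple Monte-Carlo average of the positive part of the pathwise option values, and the potential future exposure as the empirical $\alpha$-percentile per exposure date. It is a minimal example, not the paper's implementation: the function name `exposure_profiles` and the dummy input paths are hypothetical stand-ins for the values produced by the DOS and regression phases.

```python
import numpy as np

def exposure_profiles(pathwise_values, alpha=0.95):
    """Expected exposure (EE) and potential future exposure (PFE) from
    simulated pathwise option values of shape (n_paths, n_dates)."""
    exposure = np.maximum(pathwise_values, 0.0)    # exposure = positive part of the value
    ee = exposure.mean(axis=0)                     # simple Monte-Carlo average per date
    pfe = np.quantile(exposure, alpha, axis=0)     # empirical alpha-percentile per date
    return ee, pfe

# Illustration with dummy paths (placeholders for the model's simulated values)
rng = np.random.default_rng(0)
dummy_values = rng.normal(1.0, 0.5, size=(10_000, 50))
ee, pfe = exposure_profiles(dummy_values, alpha=0.95)
```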
SSRN
Using hand-collected data on CEO appointments during hedge fund activism campaigns, this study examines whether shareholder involvement in CEO selection leads to the hiring of better CEOs. The results indicate that appointments of CEOs who are hired with activist influence trigger more favorable stock market reactions and are followed by stronger profitability improvements compared with several CEO appointment control samples. Analyses of the channels suggest that activists facilitate the hiring of outsiders and more experienced CEOs, and their involvement is associated with a more thorough CEO search process, notably the formation of CEO search committees and the use of executive search firms. These findings contribute to the literature on CEO turnover, which tends to focus on the decision to lay off incumbent CEOs but provides limited insights into CEO recruiting.
arXiv
While standard estimation assumes that all datapoints come from a probability distribution with the same fixed parameters $\theta$, we focus on maximum likelihood (ML) adaptive estimation for nonstationary time series: separately estimating parameters $\theta_T$ for each time $T$ based on the earlier values $(x_t)_{t<T}$, using the (exponential) moving ML estimator $\theta_T=\arg\max_\theta l_T$ for $l_T=\sum_{t<T} \eta^{T-t} \ln(\rho_\theta (x_t))$ and some $\eta\in(0,1]$. The computational cost of such a moving estimator is generally much higher, as we need to optimize the log-likelihood multiple times; however, in many cases it can be made inexpensive thanks to dependencies. We focus on such an example: the exponential power distribution (EPD) family $\rho(x)\propto \exp(-|(x-\mu)/\sigma|^\kappa/\kappa)$, which covers a wide range of tail behavior, such as the Gaussian ($\kappa=2$) or Laplace ($\kappa=1$) distributions. It is also convenient for adaptive estimation of the scale parameter $\sigma$, as its standard ML estimate of $\sigma^\kappa$ is the average of $|x-\mu|^\kappa$. By just replacing this average with an exponential moving average, $(\sigma_{T+1})^\kappa=\eta(\sigma_T)^\kappa +(1-\eta)|x_T-\mu|^\kappa$, we can inexpensively make it adaptive. It is tested on daily log-return series for DJIA companies, leading to essentially better log-likelihoods than standard (static) estimation, surprisingly with the optimal $\kappa$ tail type varying between companies. The presented general alternative estimation philosophy provides tools which might be useful for building better models for the analysis of nonstationary time series.
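A minimal sketch of the adaptive scale update quoted above, assuming a fixed center $\mu$ and shape $\kappa$; the function name, defaults, and initialization are illustrative, not taken from the paper.

```python
import numpy as np

def adaptive_epd_scale(x, mu=0.0, kappa=2.0, eta=0.99, init_sigma_kappa=1.0):
    """Exponential-moving ML estimate of the EPD scale sigma via
    (sigma_{T+1})^kappa = eta*(sigma_T)^kappa + (1-eta)*|x_T - mu|^kappa.
    Returns the sigma_T available before observing x_T (one step ahead)."""
    s_kappa = init_sigma_kappa
    sigmas = np.empty(len(x))
    for t, xt in enumerate(x):
        sigmas[t] = s_kappa ** (1.0 / kappa)
        s_kappa = eta * s_kappa + (1.0 - eta) * abs(xt - mu) ** kappa
    return sigmas

# Example on synthetic "log-returns"
returns = np.random.default_rng(1).normal(0.0, 0.02, size=1000)
sigma_path = adaptive_epd_scale(returns, kappa=2.0, eta=0.97)
```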
arXiv
Algorithmic trading systems are often completely automated, and deep learning is increasingly receiving attention in this domain. Nonetheless, little is known about the robustness properties of these models. We study valuation models for algorithmic trading from the perspective of adversarial machine learning. We introduce new attacks specific to this domain with size constraints that minimize attack costs. We further discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models. Finally, we investigate the feasibility of realistic adversarial attacks in which an adversarial trader fools automated trading systems into making inaccurate predictions.
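For readers unfamiliar with size-constrained attacks, the following is a generic projected-gradient sketch of a bounded perturbation on a differentiable valuation model; it is not the paper's domain-specific attack, and the function name, budget `eps`, and step sizes are illustrative assumptions.

```python
import torch

def bounded_perturbation(model, x, direction=1.0, eps=0.01, steps=10, lr=0.005):
    """Push the model's prediction in a chosen direction while keeping the
    input perturbation within an L-infinity budget `eps` (a stand-in for
    limiting the attack's cost)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = direction * model(x + delta).sum()   # ascend the targeted prediction
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()
            delta.clamp_(-eps, eps)                 # enforce the size constraint
            delta.grad.zero_()
    return delta.detach()
```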
arXiv
Stock market prediction has been a classical yet challenging problem, attracting attention from both economists and computer scientists. With the purpose of building an effective prediction model, both linear and machine learning tools have been explored over the past couple of decades. Lately, deep learning models have been introduced as new frontiers for this topic, and development has been so rapid that it is hard to keep up. Hence, our motivation for this survey is to give an up-to-date review of recent works on deep learning models for stock market prediction. We not only categorize the different data sources, various neural network structures, and commonly used evaluation metrics, but also discuss implementation and reproducibility. Our goal is to help interested researchers stay current with the latest progress and to make it easier to reproduce previous studies as baselines. Based on this summary, we also highlight some future research directions in this topic.
SSRN
Using detailed micro-level data, we show that individuals' beliefs about climate change influence their choice and level of flood insurance coverage. Our empirical strategy exploits the heterogeneous impact of widening partisan polarization on climate change beliefs after the 2016 general election. We find that, in areas where flood insurance is not mandatory, a one-standard-deviation drop in the fraction of adults who believe global warming is happening leads to a 26% drop in the demand for flood insurance. In areas where flood insurance is mandatory, a similar drop in beliefs is associated with a lower propensity to carry voluntary content coverage and a higher likelihood of choosing the maximum deductible amount. As a secondary test, we exploit the flood insurance premium increases due to the Biggert-Waters Flood Insurance Reform Act of 2012. We show that homeowners who do not believe global warming is happening were more likely to terminate mandatory flood insurance coverage by prepaying mortgages.
SSRN
This paper studies the extreme dependencies between energy, agriculture and metal commodity markets, with a focus on local co-movements, allowing the identification of asymmetries and changing trends in the degree of co-movement. More precisely, starting from a non-parametric mixture copula, we use a novel copula-based local Kendall's tau approach to measure nonlinear local dependence in regions. In all pairs of commodity indexes, we find increased co-movements in extreme situations, a stronger dependence between energy and other commodity markets at lower tails, and a "V-type" local dependence for the energy-metal pairs. The three-dimensional Kendall's tau plot for upper tails in quantiles shows asymmetric co-movements in the energy-metal pairs, which tend to become negative at peak returns. Therefore, we show that the energy market can offer diversification solutions for risk management in the case of extreme bull market events.
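As a rough empirical counterpart to the idea of measuring dependence locally (not the paper's copula-based estimator), one can compute Kendall's tau only on observations that fall in a joint tail region; the helper below and the quantile level `q` are purely illustrative.

```python
import numpy as np
from scipy.stats import kendalltau

def local_kendall_tau(x, y, region="lower", q=0.10):
    """Kendall's tau restricted to a joint tail region defined by quantile q.
    A crude proxy for local dependence, for illustration only."""
    if region == "lower":
        mask = (x <= np.quantile(x, q)) & (y <= np.quantile(y, q))
    else:
        mask = (x >= np.quantile(x, 1 - q)) & (y >= np.quantile(y, 1 - q))
    tau, _ = kendalltau(x[mask], y[mask])
    return tau
```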
SSRN
In this study, we examine the ability of Bitcoin (BTC) to hedge Chinese aggregate and sectoral equity markets, and the return spillover to Altcoins, at the onset of the Novel Coronavirus outbreak. We observe that BTC is a weak hedge during the overall period and a weak safe haven at the onset of the crisis. Besides, BTC is a weak hedge, diversifier and weak safe haven for the sectoral equity indexes. Overall, gold outperforms BTC from hedging and safe haven perspectives with respect to Chinese equity markets. Lastly, we find that the rise in Altcoin prices is largely due to spillover from BTC prices.
SSRN
CCPs are planning a big bang-like collateral and discounting transition for USD. In theory, this transition is done with value compensation and risk exchange at fair market value, and such a transition would imply no value or risk impact. But, by definition of a big bang, the transition is done in an illiquid market for which the fair theoretical value is unknown. To understand the actual impact on valuation and risk, one has to look at the practical details of the transition and at how the absence of data for half of the required theoretical quantities is overcome in practice. The resulting situation prompts exotic convexity adjustments for cleared swaps and unknown valuations for non-cleared products.
SSRN
Based on internet search behavior, I create a novel measure of individual investor attention to financial information, such as annual reports and earnings, relative to other value-relevant information, such as price trends and products. Unlike existing proxies for retail attention, this measure is associated with stronger price reactions to earnings news. I find that, when individual investor attention to financial information is high, the post-earnings announcement drift and the underreaction to earnings are weaker. However, the overreaction to accruals is stronger. The last result holds neither for sophisticated investor attention nor for passive attention driven by the media.
arXiv
In the presence of monotone information, stochastic Thiele equations describing the dynamics of state-wise prospective reserves are closely related to the classic martingale representation theorem. When the information utilized by the insurer is non-monotone, classic martingale theory does not apply. By taking an infinitesimal approach, we derive generalized stochastic Thiele equations that allow for information discarding. The results and their implication in practice are illustrated via examples where information is discarded upon and after stochastic retirement.
SSRN
Firms with high dispersion in analyst earnings forecasts tend to earn relatively low future stock returns. We examine whether investors' inability to unravel differences in firms' propensity to meet earnings expectations explains this phenomenon. We first demonstrate that the return predictability of forecast dispersion is concentrated only around earnings announcement dates. Next, we find that the return predictability of dispersion is driven by the component of dispersion that is explained by measures of expected analyst forecast pessimism and firms' expectations management incentives. These results are not a reflection of other factors such as differences of opinion, firms' exposure to earnings announcement premia, and short-sale constraints. Overall, we conclude that the forecast dispersion anomaly can be explained by investor mispricing of firms' participation in the earnings surprise game.
SSRN
The price of proprietary market data, data with low latency and with complete depth of book, sold by exchanges has risen dramatically in the previous decade. In fact, in October 2018, the SEC failed to approve requests by NASDAQ and NYSE-ARCA to raise the price of their data. The paper conceptually analyzes the nature of the demand for proprietary data and concludes that, for a significant part of the market, data from different exchanges are complementary: buying NASDAQ proprietary data increases the usefulness of NYSE-ARCA data. As a consequence, we should not expect competition between the 13 exchanges to constrain prices. This is in contrast to net trading fees, which are driven by competition to reasonable levels.
SSRN
In this paper, we revisit the equity premium puzzle reported in 1985 by Mehra and Prescott. We show that the large equity premium that they report can be explained by choosing a more appropriate distribution for the return data. We demonstrate that the high risk-aversion value observed by Mehra and Prescott may be attributable to the problem of fitting a proper distribution to the historical returns and partly caused by poorly fitting the tail of the return distribution. We describe a new distribution that better fits the return distribution and that, when used to describe historical returns, can explain the large equity risk premium, thereby resolving the puzzle.
arXiv
EBIs/ESOs substantially change the traditional production/service function because ESOs/EBIs can have different psychological effects (motivation or de-motivation), and can create intangible capital and different economic payoffs. Although Game Theory is flawed, it can be helpful in describing interactions in ESO/EBI transactions. ESOs/EBIs involve two-stage games, and there are no perfect Nash Equilibria for the two sub-games. The large number of actual and potential participants in these games significantly complicates the resolution of equilibria and increases the dynamism of the games, given that players are more sensitive to other people's moves in such games. This article: a) analyzes how ESOs/EBIs affect traditional assumptions of production functions (in both the manufacturing and service sectors), b) analyzes ESO/EBI transactions using game theory concepts, and c) illustrates some of the limitations of game theory.
SSRN
Prior studies of interest rate differentials between nonprofit credit unions ("financial cooperatives") and for-profit commercial banks generally find that credit unions offer lower loan rates and higher deposit rates. However, these studies likely suffer from selection bias, since they rely on data at the level of the financial institution or branch, which cannot account for demand-side (individual or household) or loan-level characteristics. We use household-level data from the Survey of Consumer Finances from 2001 to 2016 to compare auto loan rates for households that borrow from credit unions, banks and other financial institutions (captive lenders and auto finance companies). This allows us to control for important household- and loan-level characteristics, such as income, net worth, education, age, marital status, ethnicity, home ownership, employment status, prior bankruptcies and delinquencies, and loan term and amount. We find that, after accounting for these household- and loan-level characteristics and loan origination year fixed effects, households that receive new auto loans from credit unions pay 0.75 percentage points less on interest rates for new vehicles, and 1.47 percentage points less on used vehicles, relative to households that receive auto loans from banks. The credit union-bank interest rate differential is generally smaller than naïve estimates using institution-level interest rate data, but remains statistically significant and economically meaningful. Households that use captive lenders and auto finance companies generally pay rates that fall between banks and credit unions for new vehicle loans, but pay the highest rates for used vehicles. We provide a back-of-the-envelope estimate of the aggregate savings to credit union members borrowing from credit unions relative to banks and find that the savings from auto loans alone are larger than the entire value of the estimated credit union tax exemption. Therefore, we argue that credit unions charge lower auto loan rates due to both lower income taxes and their member-oriented objectives as nonprofit cooperatives. We argue that alternative explanations for lower rates at credit unions, such as the extent of indirect auto lending, auto refinancing, informational advantages, and cross-subsidization across loan products and services, are unlikely to explain the results.
SSRN
Expectations about macro-finance variables, such as inflation, vary significantly across genders, even within the same household. We conjecture that traditional gender roles expose women and men to different economic signals in their daily lives, which in turn produce systematic variation in expectations. Using unique data on the contributions of men and women to household grocery chores, their resulting exposure to price signals, and their inflation expectations, we show that the gender expectations gap is tightly linked to participation in grocery shopping. We also document a gender gap in other economic expectations and discuss how it might affect economic choices.
SSRN
We assess the ability of an information aggregation mechanism that operates in the over-the-counter market for financial derivatives to reduce valuation uncertainty among market participants. The analysis is based on a unique dataset of price estimates for S&P 500 index options that major financial institutions provide to a consensus pricing service. We consider two dimensions of uncertainty: uncertainty about fundamental asset values and strategic uncertainty about competitors' valuations. Through structural estimation, we obtain empirical measures of fundamental and strategic uncertainty that are based on market participants' posterior beliefs. We show that the main contribution of the consensus pricing service is to reduce its subscribers' uncertainty about competitors' valuations.
SSRN
Firms go public to make acquisitions, but private firms benefit from lower regulatory cost. Investment by newly public firms may be limited if managers need to focus on compliance instead of growth. Exploiting a 2012 US policy reform, we show that when regulatory cost is lower, firms make more acquisitions, do so more quickly after listing, and also increase other forms of investment. Examining potential unintended consequences of reduced regulation, we find that opportunistic bidding arising from higher information asymmetry does not explain these results. We inform the ongoing policy debate on broadening the scale and scope of regulatory relief.
SSRN
The Lightning Network is a decentralized payment network built on top of a blockchain, in which intermediary nodes provide a trustless routing service for end users. We provide an overview of the current state of the network and show that it can be well approximated by a scale-free generative model with a fitness parameter, which suggests that nodes behave strategically on the network. Those strategic interactions between nodes can be described by a Bertrand competition model with capacity constraints. We show that there is a unique equilibrium in which a centralized network is never optimal, and the routing fee is strictly greater than the marginal cost. When nodes are heterogeneous in their opportunity cost of capital only, the equilibrium network structure can match the current state of the network.
SSRN
This paper investigates the simultaneous determinants of corporate capital structure and bond spreads of non-financial companies between 1998 and 2016, whilst controlling for the impacts of institutional, geographical, and political factors. It has been established in the development finance literature that a country's financial and legal systems have a significant impact on the capacity of its private sector to raise investment funding. Our results show that the impacts of Common Law and French Law systems, and of "market-based"/"bank-based" systems, disappear once institutional variables, such as country income level and the effectiveness of government, the rule of law, and regulatory quality, are taken into account. Institutional factors determine capital structure, but not corporate risk. Both variables interact significantly with each other, whilst profitability, tangibility and macroeconomic performance were found to be the common determinants of both leverage and corporate bond spreads.
SSRN
This work analyzes the classic trade-off theory of capital structure in a dynamic model where equity holders do not have any dynamic commitment power. The equilibrium analyzed in this paper depends on the firm's whole history instead of just the firm's current income and debt level. I develop a methodology to determine whether a debt issuance policy is supportable in equilibrium. This work proves that, under mild conditions, the equity value in the non-Markov equilibrium is higher than the equity value in the Markov equilibrium, the one depicted in DeMarzo and He (2017). The equity holders benefit when the financial market considers the firm's whole history, which I call reputation. This work shows that equity holders can implement a Leverage Target policy in equilibrium under certain conditions. The Leverage Target policy breaks the Leverage Ratchet Effect discussed in Admati, DeMarzo, Hellwig and Pfleiderer (2018). With a specific leverage target and certain parameters, the equity value of the Leverage Target policy is the largest among all equilibrium equity values.
arXiv
We find economically and statistically significant gains from using machine learning to dynamically allocate between the market index and the risk-free asset. We model the market price of risk to determine the optimal weights in the portfolio: reward-risk market timing. This involves forecasting the direction of next month's excess return, which gives the reward, and constructing a dynamic volatility estimator that is optimized with a machine learning model, which gives the risk. Reward-risk timing with machine learning provides substantial improvements in investor utility, alphas, Sharpe ratios, and maximum drawdowns after accounting for transaction costs and leverage constraints, and on a new out-of-sample test set. This paper provides a unifying framework for machine learning applied to both return- and volatility-timing.
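A minimal sketch of what a reward-risk timing rule can look like once a direction forecast and a volatility forecast are available; this volatility-targeting form and its parameters are assumptions for illustration, not the paper's exact weighting scheme.

```python
def reward_risk_weight(prob_up, sigma_hat, target_vol=0.10, max_leverage=1.5):
    """Illustrative reward-risk timing rule (assumed, not the paper's rule):
    take the side of the market predicted by the ML direction forecast, and
    scale the position inversely to the ML-optimized volatility forecast."""
    direction = 1.0 if prob_up >= 0.5 else -1.0         # reward: forecast sign of excess return
    scale = min(target_vol / sigma_hat, max_leverage)   # risk: volatility targeting with a leverage cap
    return direction * scale                            # the rest of the portfolio sits in the risk-free asset
```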
SSRN
While there is growing evidence of persistent or even permanent output losses from financial crises, the causes remain unclear. One candidate is intangible capital - a rising driver of economic growth that, being non-pledgeable as collateral, is vulnerable to financial frictions. By sheltering intangible investment from financial shocks, counter-cyclical macroeconomic policy could strengthen longer-term growth, particularly so where strong product market competition prevents firms from self-financing their investments through rents. Using a rich cross-country firm-level dataset and exploiting heterogeneity in firm-level exposure to the sharp and unforeseen tightening of credit conditions around September 2008, we find strong support for these theoretical predictions. The quantitative implications are large, highlighting a powerful stabilizing role for macroeconomic policy through the intangible investment channel, and its complementarity with pro-competition product market deregulation.
SSRN
Common ownership (also called horizontal shareholding) refers to a stock investor's ownership of minority stakes in multiple competing firms. Recent empirical studies have purported to show that institutional investors' common ownership reduces competition among commonly owned competitors. This Article considers the legality of "mere" common ownership – horizontal shareholding that is not accompanied by any sort of illicit agreement (e.g., a hub-and-spoke conspiracy) or the holding of control-conferring shares – under the U.S. antitrust laws. Prominent antitrust scholars and the leading treatise have concluded that mere common ownership that has the incidental effect of lessening market competition may violate both Clayton Act Section 7 and Sherman Act Section 1. This Article demonstrates otherwise. Competition-lessening instances of mere common ownership do not violate Section 7 because they fall within the provision's "solely for investment" exemption, which the scholars calling for condemnation have misinterpreted. Mere common ownership does not run afoul of Section 1 because it lacks the sort of agreement (contract, combination, or conspiracy) required for liability under that provision. From a social welfare standpoint, these legal outcomes are desirable. Condemning mere common ownership under the antitrust laws would likely entail significant marginal costs, while the marginal benefits such condemnation would secure are speculative. Accordingly, courts and enforcers should not, on the current empirical record, stretch the antitrust laws to condemn mere common ownership.
arXiv
This paper investigates optimal consumption, investment, and healthcare spending under Epstein-Zin preferences. Given consumption and healthcare spending plans, Epstein-Zin utilities are defined over an agent's random lifetime, partially controllable by the agent as healthcare reduces Gompertz' natural growth rate of mortality. In a Black-Scholes market, the stochastic optimization problem is solved through the associated Hamilton-Jacobi-Bellman (HJB) equation. Compared with classical Epstein-Zin utility maximization, the additional controlled mortality process complicates the uniqueness of Epstein-Zin utilities and verification arguments. A combination of probabilistic arguments and analysis of the HJB equation is required to resolve the challenges. In contrast to prior work under time-separable utilities, Epstein-Zin preferences largely facilitate calibration. For the five different countries we examined, the model-generated mortality closely approximates actual mortality data; moreover, the calibrated efficacy of healthcare is in close agreement with empirical studies on healthcare across countries.
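For reference, the Gompertz law referred to above specifies a force of mortality that grows exponentially with age; a common parametrization is shown below, with healthcare spending acting, in the paper's model, to reduce the growth rate $\beta$. The exact controlled dynamics are specified in the paper.

$$ \lambda_t = A\,e^{\beta t}, \qquad\text{equivalently}\qquad d\lambda_t = \beta\,\lambda_t\,dt,\quad \lambda_0 = A,\ A>0,\ \beta>0. $$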
arXiv
We apply numerical dynamic programming techniques to solve discrete-time multi-asset dynamic portfolio optimization problems with proportional transaction costs and shorting/borrowing constraints. Examples include problems with multiple assets and many trading periods over a finite horizon. We also solve dynamic stochastic problems with a portfolio including one risk-free asset, an option, and its underlying risky asset, under transaction costs and constraints. These examples show that it is now tractable to solve such problems.
SSRN
This paper studies a non-concave optimization problem under a Value-at-Risk (VaR) or an Expected Shortfall (ES) constraint. The non-concavity of the problem stems from the non-linear payoff structure of the optimizing investor. We obtain the closed-form optimal wealth under an ES constraint as well as under a VaR constraint, and explicitly calculate the optimal trading strategy for a CRRA (i.e., constant relative risk aversion) utility function. In our non-concave optimization problem, we find that for any VaR constraint with an arbitrary risk level, there exists an ES constraint leading to the same investment strategy, assuming that the regulation only protects the debt holders' benefit to a certain level. This differs from the conclusion drawn in Basak and Shapiro (2001) for the concave optimization problem, where VaR and ES lead to different solutions.
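For concreteness, one common way to formalize the two constraints on terminal wealth $W_T$, for a floor $\underline{W}$, confidence level $\alpha$ and shortfall budget $\varepsilon$, is shown below; the paper's exact formulation may differ in detail.

$$ \text{VaR:}\quad \mathbb{P}\big(W_T \ge \underline{W}\big) \ \ge\ 1-\alpha, \qquad\qquad \text{ES:}\quad \mathbb{E}\big[(\underline{W}-W_T)^+\big] \ \le\ \varepsilon. $$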
arXiv
In this paper we propose a generalization of the Deep Galerkin Method (DGM) of \cite{dgm} to deal with Path-Dependent Partial Differential Equations (PPDEs). These equations first appeared in the seminal work of \cite{fito_dupire}, where the functional It\^o calculus was developed to deal with path-dependent financial derivatives contracts. The method, which we call the Path-Dependent DGM (PDGM), consists of using a combination of feed-forward and Long Short-Term Memory architectures to model the solution of the PPDE. We then analyze several numerical examples, many from the Financial Mathematics literature, that show the capabilities of the method under very different situations.
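A minimal sketch of the kind of architecture the PDGM combines (an LSTM summarizing the path observed so far, feeding a feed-forward head); the class name and layer sizes are illustrative, and the PPDE residual loss based on functional Itô derivatives is not implemented here.

```python
import torch
import torch.nn as nn

class PathDependentNet(nn.Module):
    """Illustrative LSTM + feed-forward parametrization of a PPDE solution
    u(t, path): the LSTM encodes the discretized path up to time t, and the
    feed-forward head maps (encoding, t) to a value."""
    def __init__(self, d_path=1, d_hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=d_path, hidden_size=d_hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(d_hidden + 1, d_hidden), nn.Tanh(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, path, t):
        # path: (batch, n_steps, d_path); t: (batch, 1)
        _, (h, _) = self.lstm(path)
        features = torch.cat([h[-1], t], dim=1)
        return self.head(features)

# Example: value of a 1-d path observed over 20 steps, queried at t = 0.5
net = PathDependentNet()
u = net(torch.randn(8, 20, 1), torch.full((8, 1), 0.5))
```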
arXiv
A commonly used stochastic model for derivative and commodity market analysis is the Barndorff-Nielsen and Shephard (BN-S) model. Though this model is very efficient and analytically tractable, it suffers from the absence of long-range dependence, among other issues. In this paper, the analysis is restricted to crude oil price dynamics. A simple way of improving the BN-S model with the implementation of various machine learning algorithms is proposed. This refined BN-S model is more efficient and has fewer parameters than other models which are used in practice as improvements of the BN-S model. The procedure and the model show the application of data science for extracting a "deterministic component" out of processes that are usually considered to be completely stochastic. Empirical applications validate the efficacy of the proposed model for long-range dependence.
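For reference, the baseline BN-S model is usually written with a log-price $X_t$ whose squared volatility follows a non-Gaussian Ornstein-Uhlenbeck process driven by a subordinator $Z$ (the background driving Lévy process). The standard formulation below is included only as a reminder; the paper's refinement augments this baseline with machine-learning-driven components.

$$ dX_t = (\mu + \beta\,\sigma_t^2)\,dt + \sigma_t\,dW_t + \rho\,dZ_{\lambda t}, \qquad d\sigma_t^2 = -\lambda\,\sigma_t^2\,dt + dZ_{\lambda t}, \qquad \rho \le 0. $$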
arXiv
We consider dynamic risk measures induced by Backward Stochastic Differential Equations (BSDEs) in an enlargement-of-filtration setting. On a fixed probability space, we are given a standard Brownian motion and a pair of random variables $(\tau, \zeta) \in (0,+\infty) \times E$, with $E \subset \mathbb{R}^m$, that enlarge the reference filtration, i.e., the one generated by the Brownian motion. These random variables can be interpreted financially as a default time and an associated mark. After introducing a BSDE driven by the Brownian motion and the random measure associated to $(\tau, \zeta)$, we define the dynamic risk measure $(\rho_t)_{t \in [0,T]}$, for a fixed time $T > 0$, induced by its solution. We prove that $(\rho_t)_{t \in [0,T]}$ can be decomposed into a pair of risk measures, acting before and after $\tau$, and we characterize its properties under suitable assumptions on the driver of the BSDE. Furthermore, we prove an inequality satisfied by the penalty term associated to the robust representation of $(\rho_t)_{t \in [0,T]}$, and we discuss the dynamic entropic risk measure case, providing examples where it is possible to write its decomposition explicitly and simulate it numerically.
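As background, in the plain Brownian setting (before the enlargement with $(\tau,\zeta)$) a BSDE-induced dynamic risk measure is typically defined from a driver $g$ as below; the quadratic driver recovers the dynamic entropic risk measure mentioned at the end of the abstract. The paper's BSDE additionally involves the random measure associated to $(\tau,\zeta)$.

$$ \rho_t(\xi) = Y_t, \qquad Y_t = -\xi + \int_t^T g(s, Z_s)\,ds - \int_t^T Z_s\,dW_s, \qquad t\in[0,T], $$

and for $g(s,z) = \tfrac{\gamma}{2}|z|^2$ one obtains the dynamic entropic risk measure $\rho_t(\xi) = \tfrac{1}{\gamma}\ln \mathbb{E}\big[e^{-\gamma \xi}\,\big|\,\mathcal{F}_t\big]$.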
arXiv
We show that adversarial reinforcement learning (ARL) can be used to produce market making agents that are robust to adversarial and adaptively chosen market conditions. To apply ARL, we turn the well-studied single-agent model of Avellaneda and Stoikov [2008] into a discrete-time zero-sum game between a market maker and an adversary, a proxy for other market participants who would like to profit at the market maker's expense. We empirically compare two conventional single-agent RL agents with ARL, and show that our ARL approach leads to: 1) the emergence of naturally risk-averse behaviour without constraints or domain-specific penalties; 2) significant improvements in performance across a set of standard metrics, evaluated with or without an adversary in the test environment; and 3) improved robustness to model uncertainty. We empirically demonstrate that our ARL method consistently converges, and we prove for several special cases that the profiles we converge to are Nash equilibria in a corresponding simplified single-stage game.
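For readers unfamiliar with the single-agent benchmark, the classic Avellaneda-Stoikov solution (standard results from the 2008 paper, stated here only for context) quotes symmetrically around a reservation price, with a spread depending on risk aversion $\gamma$, volatility $\sigma$, remaining horizon $T-t$, and the order-flow decay parameter $k$:

$$ r(s,q,t) = s - q\,\gamma\,\sigma^2\,(T-t), \qquad \delta^a + \delta^b = \gamma\,\sigma^2\,(T-t) + \frac{2}{\gamma}\,\ln\!\Big(1+\frac{\gamma}{k}\Big), $$

where $s$ is the mid-price and $q$ the current inventory. The ARL construction replaces the fixed market dynamics of this model with an adversary that chooses them adaptively.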
SSRN
Lenders are unwilling to accept lower credit spreads for secured debt relative to unsecured debt when a firm is healthy. However, they accept significantly lower credit spreads for secured debt when a firm's credit quality deteriorates, the economy slows, or average credit spreads widen. This contingent valuation of collateral or security, coupled with the borrower perceiving a loss of operational and financial flexibility when issuing secured debt, may explain why firms issue secured debt on a contingent basis; they issue more when their credit quality deteriorates, the economy slows, and average credit spreads widen.
SSRN
We study a unique data set of all client trades that the six largest Canadian dealers sent to U.S. equity markets in 2014-2015. Contrary to the public perception, Canadian dealers use U.S. markets only lightly and send less than 5% of their $-volume to the U.S.; on 60% of security-day observations, they send no order flow to the U.S. Usage of U.S. markets differs significantly among the dealers: one uses almost exclusively exchanges, but most trade only off-exchange, in so-called dark markets or directly with other dealers. A strong factor that influences the U.S.-bound routing decision is a broker's volume: the larger the volume, the more likely it is that the broker uses U.S. markets. Overall, the data indicate that brokers' U.S.-bound routing decisions are driven by the size of client order flow which requires access to the additional liquidity offered by U.S. markets. For a 3-month period, one dealer made extensive use of U.S. wholesalers, and we find no evidence for a negative impact on the trading costs for its Canadian flow.
SSRN
Recently, a number of structured funds have emerged as public-private partnerships with the intent of promoting investment in renewable energy in emerging markets. These funds seek to attract institutional investors by tranching the asset pool and issuing senior notes with a high credit quality. Financing of renewable energy (RE) projects is achieved via two channels: small RE projects are financed indirectly through local banks that draw loans from the fund's assets, whereas large RE projects are directly financed from the fund. In a bottom-up Gaussian copula framework, we examine the diversification properties and RE exposure of the senior tranche. To this end, we introduce the LH++ model, which combines a homogeneous infinitely granular loan portfolio with a finite number of large loans. Using expected tranche percentage notional (which takes a similar role as the default probability of a loan), tranche prices and tranche sensitivities in RE loans, we analyse the risk profile of the senior tranche. We show how the mix of indirect and direct RE investments in the asset pool affects the sensitivity of the senior tranche to RE investments and how to balance a desired sensitivity with a target credit quality and target tranche size.
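For context, the one-factor Gaussian copula building blocks used in such bottom-up models are standard (the LH++ specifics, tranching and pricing, are developed in the paper): conditional on a systematic factor $M \sim N(0,1)$, the default fraction of the infinitely granular sub-pool is given by the Vasicek large-homogeneous-portfolio formula, while each large, directly financed RE loan defaults individually,

$$ p(M) = \Phi\!\left(\frac{\Phi^{-1}(p) - \sqrt{\rho}\,M}{\sqrt{1-\rho}}\right), \qquad \text{loan } i \text{ defaults} \iff \sqrt{\rho}\,M + \sqrt{1-\rho}\,\varepsilon_i \le \Phi^{-1}(p_i),\quad \varepsilon_i \sim N(0,1), $$

with $p$, $p_i$ the unconditional default probabilities and $\rho$ the asset correlation.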
SSRN
We examine the effect that foreign competition has on firms' default risk, and document a strong and robust negative association. Utilizing a large sample of public U.S. manufacturing firms and industry-level import penetration data, we find that an increase in import penetration from the 25th to the 75th percentile leads to a reduction in corporate default risk of roughly 15.5%. These results hold after accounting for potential endogeneity concerns. We also document a negative association between import penetration and the incidence of bankruptcy as well as the incidence of covenant violations. Further tests reveal that the negative association between import penetration and default risk is most pronounced for firms that have weaker governance structures, suggesting that foreign competition is a substitute for effective corporate governance. Our paper contributes to the ongoing discourse on the costs and benefits of trade liberalization, documenting the "bright side" of foreign competition.
SSRN
This paper applies data envelopment analysis (DEA) to study the effect of non-bank financial intermediation on bank efficiency in the eight EU jurisdictions individually monitored under the Financial Stability Board (FSB) Global Shadow Banking Monitoring Report in the period 2014-2016. The efficiency analysis is conducted by applying a profit-based, input-oriented DEA variable returns-to-scale model in a two-stage procedure. In the first stage, the average DEA efficiency scores are calculated. We find evidence that average aggregate technical efficiency increased from 2014 to 2016. In the second stage, the impact of environmental factors, such as the FSB's narrow measure of non-bank financial intermediation as well as macroeconomic factors, is analyzed by conducting a Tobit regression. The results provide evidence of a negative relationship between non-bank financial intermediation and average bank efficiency and a positive impact of GDP on average bank efficiency. These novel empirical findings contribute to the policy discussions on the effect of non-bank financial intermediation on bank performance and thus on financial stability. Moreover, our analysis provides unique initial evidence in favor of the hypothesis that increased non-bank financial intermediation might result in a reduction of bank profitability.
SSRN
Previous research documents two puzzling results that cast doubt on the usefulness of accounting information to investors: the declining power of street EPS in explaining earnings announcement returns and increasing price reactions to earnings announcements. I show this evidence is due to omitting non-street EPS surprises from the earnings announcement analysis. When I control for "other surprises" based on the seven most frequent I/B/E/S forecasts (sales, gross margin, EBITDA, operating and net income, GAAP earnings, and cash flows), the adjusted R2 increases over fourfold and street EPS loses explanatory power. The "other surprises" are useful on their own and help interpret street EPS surprises, particularly small street EPS surprises, and the earnings news of high-accrual and growth stocks and of high institutional ownership firms. Overall, the study identifies an important set of previously omitted variables that investors use in assessing earnings announcement day information.