Research articles for 2020-11-09
arXiv
The existing approaches to sparse wealth allocations (1) are suboptimal due to the bias induced by the $\ell_1$-penalty; (2) require the number of assets to be less than the sample size; and (3) do not model the factor structure of stock returns in high dimensions. We address these shortcomings and develop a novel strategy that produces unbiased and consistent sparse allocations. We demonstrate that (1) failing to correct for the bias leads to low out-of-sample portfolio returns, and (2) only sparse portfolios achieved positive cumulative returns during several economic downturns, including the dot-com bubble of 2000, the financial crisis of 2007-09, and the COVID-19 outbreak.
SSRN
We construct a new index of global equity market risk (EMR) using market interconnectedness and volatilities. We study the relationship between our EMR and the VIX over the last two decades. The EMR is shown to be a novel approach to measuring global market risk and an alternative to the VIX. Using data on 20 major stock markets, including the G10 economies, we find spikes in our EMR index during the dot-com bubble, the global financial crisis, the European sovereign debt crisis, and the novel coronavirus pandemic. The results show that the global financial crisis and the COVID-19-induced crisis produced the highest spikes in financial market risk on record, suggesting stronger evidence of contagion in both periods.
arXiv
Using the concept of self-decomposable subordinators introduced in Gardini et al. [11], we build a new bivariate Normal Inverse Gaussian process that can capture stochastic delays. In addition, we develop a novel path simulation scheme that relies on the mathematical connection between self-decomposable Inverse Gaussian laws and L\'evy-driven Ornstein-Uhlenbeck processes with an Inverse Gaussian stationary distribution. We show that our approach improves on the existing simulation scheme detailed in Zhang and Zhang [23] because it does not rely on an acceptance-rejection method. Finally, these results are applied to the modelling of energy markets and to the pricing of spread options using the proposed Monte Carlo scheme and Fourier techniques.
arXiv
Subdiffusion is a well-established phenomenon in physics. In this paper we apply subdiffusive dynamics to the analysis of financial markets. We focus on the financial aspects of the time-fractional diffusion model with a moving boundary, i.e., American and barrier option pricing in the subdiffusive Black-Scholes model. Two computational methods for valuing American options in the considered model are proposed. The weighted scheme of the finite difference (FD) method is derived and the main properties of the method are presented. The Longstaff-Schwartz method is applied to the discussed model and compared to the previous method. The article also shows how to numerically value a wide range of barrier options using the FD approach. The proposed FD method has order of accuracy $2-\alpha$ with respect to time, where $\alpha\in(0,1)$ is the subdiffusion parameter, and order $2$ with respect to space.
arXiv
Stock classification is a challenging task due to high levels of noise and the volatility of stock returns. In this paper we show that transfer learning can help with this task: we pre-train a model to extract universal features on the full universe of stocks of the S$\&$P500 index and then transfer it to another model that directly learns a trading rule. Transferred models deliver more than double the risk-adjusted returns of their counterparts trained from scratch. In addition, we propose the use of data augmentation on the feature space, defined as the output of a pre-trained model (i.e., augmenting the aggregated time-series representation). We compare this augmentation approach with the standard one, i.e., augmenting the time-series in the input space. We show that augmentation in the feature space leads to a $20\%$ increase in risk-adjusted return compared to a model trained with transfer learning but without augmentation.
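The abstract does not specify the authors' exact feature-space augmentation, so the sketch below shows only one plausible instance of the idea: jitter embeddings produced by a (hypothetical) frozen pre-trained encoder with Gaussian noise scaled to each feature's dispersion, rather than transforming the raw input series.

```python
import numpy as np

def augment_features(z, n_aug=4, noise_scale=0.1, rng=None):
    """Feature-space augmentation sketch: jitter pre-trained embeddings z
    (shape [n_samples, d]) with Gaussian noise scaled per-feature."""
    rng = np.random.default_rng(rng)
    sigma = z.std(axis=0, keepdims=True)
    copies = [z + noise_scale * sigma * rng.standard_normal(z.shape)
              for _ in range(n_aug)]
    return np.vstack([z] + copies)

# stand-in for embeddings from a frozen encoder (shapes are hypothetical)
z = np.random.default_rng(0).standard_normal((256, 64))
print(augment_features(z, n_aug=4).shape)  # (1280, 64)
```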
arXiv
Motivated by recent applications of sequential decision making in matching markets, in this paper we attempt to formulate and abstract market designs in peer lending. We set out a paradigm for how peer lending can be conceived from a matching-market perspective, with sequential decision making embedded in it. We lay the stepping stones toward understanding how sequential decision making can be made more flexible in peer lending platforms, as a way to devise fairer and more equitable outcomes for both borrowers and lenders. The goal of this paper is to offer ideas on how and why lending platforms conceived from the perspective of matching markets can allow fairness and equitable outcomes to be incorporated into platform design.
SSRN
We study the effects of the diversification of funding sources on the financing conditions for firms. We exploit a regulatory reform which took place in Italy in 2012, i.e., the introduction of "minibonds", which opened a new market-based funding opportunity for unlisted firms. Using the Italian Credit Register, we investigate the impact of minibond issuance on bank credit conditions for issuer firms, at both the firm-bank and firm level. We compare new loans granted to issuer firms with new loans concurrently granted to similar non-issuer firms. We find that issuer firms obtain lower interest rates on bank loans of the same maturity than non-issuer firms, suggesting an improvement in their bargaining power with banks. In addition, issuer firms reduce the amount of bank credit used but increase the overall amount of available external funds, pointing to a substitution for bank credit and a diversification of corporate funding sources. Studying their ex-post performance, we find that issuer firms expand their total assets and fixed assets, and also raise their leverage.
SSRN
It is a staggering statistic that half of the population consistently outperform the remainder when it comes to financial literacy. But could the measurement tools have inherent gender bias? This study investigates the reasons for selecting the non-response option in financial literacy questions, including numerical self-efficacy, risk aversion, overconfidence and socio-economic status. Our analysis finds overwhelming evidence that females avoid answering these financial literacy questions, and we infer that having an interest in money matters at school age is a potential pathway for effective intervention. These results are important for shaping policy and providing resources that close the gap.
SSRN
Investment banks like Goldman Sachs have started a "guaranteed close" business where investors looking to buy or sell shares of a certain stock can get a guarantee from the bank to execute their orders at the close price set on the primary exchange. Daily trading volume through this venue has been increasing rapidly, reaching about one-third of the daily volume through the close auction in 2018. Using TAQ data and a quasi-experimental shock -- the NYSE fee cut in 2018 -- we find that when the fraction of trades through the "guaranteed close" increases, the informativeness of the close price increases. We develop a model where a bank conducting a "guaranteed close" business competes with the exchange on transaction fees and profits by trading strategically on order flow information. The bank's trading activity concentrates the price-relevant information in the exchange. Consequently, the "guaranteed close" improves price discovery at the market close.
SSRN
In this paper, we introduce the concept of a standardized call function and obtain a new approximating formula for the Black and Scholes call function based on the hyperbolic tangent. Unlike other solutions proposed in the literature, this formula is invertible; hence, it is useful for pricing and risk management as well as for extracting the implied volatility from quoted options. The latter is of particular importance since it indicates the risk of the underlying and is the main component of the option's price; it is what trading desks focus on. Further, we estimate numerically the approximation error of the suggested solution and, by comparing our results in computing the implied volatility with the most common methods available in the literature, we discuss the challenges of this approach.
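The abstract does not reproduce the authors' formula, but the appeal of a tanh-based form is easy to see in the at-the-money-forward special case ($K = Se^{rT}$), where the Black-Scholes call reduces to $C = S(2\Phi(\sigma\sqrt{T}/2)-1)$; replacing $\Phi(x)$ with the classical approximation $(1+\tanh(\sqrt{2/\pi}\,x))/2$ yields a formula that inverts in closed form. A minimal sketch under those assumptions (this is not the paper's formula):

```python
import numpy as np

C_PI = np.sqrt(2.0 / np.pi)  # matches the slope of the normal CDF at 0

def atm_call_tanh(S, sigma, T):
    """ATM-forward BS call C = S*(2*Phi(sigma*sqrt(T)/2) - 1),
    with Phi(x) ~ (1 + tanh(sqrt(2/pi)*x))/2  =>  C ~ S*tanh(...)."""
    return S * np.tanh(C_PI * sigma * np.sqrt(T) / 2.0)

def implied_vol_tanh(C, S, T):
    """Closed-form inversion of the tanh approximation."""
    return 2.0 * np.arctanh(C / S) / (C_PI * np.sqrt(T))

S, sigma, T = 100.0, 0.25, 1.0
C = atm_call_tanh(S, sigma, T)
print(C, implied_vol_tanh(C, S, T))  # recovers sigma = 0.25 exactly
```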
arXiv
In this paper, we develop a theory of common decomposition for two correlated Brownian motions in which, using a change-of-time method, the correlated Brownian motions are represented by a triplet of processes, $(X,Y,T)$, where $X$ and $Y$ are independent Brownian motions. We give equivalent conditions for the triplet to be independent. We discuss the connections and differences between the common decomposition and the local correlation model. Guided by this discussion, we propose a new method for constructing correlated Brownian motions which performs very well in simulation. As applications, we use these very general results for pricing two-factor financial derivatives whose payoffs depend heavily on the correlation of the underlyings. In addition, with the help of numerical methods, we discuss the pricing deviation incurred when a constant-correlation model is substituted for a general one.
arXiv
We consider shared listings on two South African equity exchanges: the Johannesburg Stock Exchange (JSE) and the A2X Exchange. A2X is an alternative exchange that provides for both shared listings and new listings within the financial market ecosystem of South Africa. From a science perspective it provides the opportunity to compare markets trading similar shares in a similar regulatory and economic environment, but with vastly different liquidity, costs, and business models. A2X currently has competitive settlement and transaction pricing compared to the JSE, but the JSE has deeper liquidity. In pursuit of an empirical understanding of how these differences relate to their respective price response dynamics, we compare the distributions and auto-correlations of returns on different time scales; we compare price impact and master curves; and we compare the cost of trading on each exchange. We find that various stylised facts become similar as the measurement or sampling time scale increases. However, the same securities can have vastly different price responses irrespective of time scale. This is not surprising given the different liquidity and order-book resilience. We demonstrate that direct costs dominate the cost of trading, and highlight the importance of competitively positioned cost ceilings. Universality is crucial for meaningfully comparing cross-exchange price responses, but in the case of A2X it has yet to emerge, due to the infancy of the exchange -- making such comparisons difficult.
SSRN
There exists abundant academic literature showing that momentum, i.e. a positive correlation between the initial ranking of stocks by their past returns and their subsequent returns, is pervasive across different markets and time periods. Although recent criticism speculates on the disappearance of momentum returns, asset managers have launched strategies to harvest the risk premium in the form of well-diversified equity funds. In this research paper I delve deeper into the construction of concentrated portfolios with fewer than 50 stocks. Based on monthly data for the US market universe, I first investigate the consistency of momentum returns over the latest 20 years (1999-2019) with decile analysis and then study the characteristics of both unconstrained and sector-neutral concentrated portfolios. The empirical results show that momentum in the last decade (2010-2019), measured as the performance of a zero-investment portfolio (long winners and short losers), is present but with less intensity than in the previous decade, and that over the same period the top-decile "long only" portfolio, built with previous winners' stocks, still beats the market index with better Sharpe and Sortino ratios. The results on concentrated portfolios, in particular portfolios with fewer than 10 stocks, show a clear dependency on the given universe constituents. To make the analysis less dependent on particular universe constituents, I propose running the momentum strategy on 1000 randomly subsampled stock universes and show empirically that the relation between the number of stocks in the portfolios and the corresponding performance is monotonic and statistically significant (the fewer stocks, the higher the performance). Finally, I report that a sector-neutral portfolio, i.e. a portfolio with the same number of stocks for each industrial sector, shows superior risk-return characteristics compared with unconstrained ones. In the last 20 years a long-only portfolio based on overlapping sector-neutral sub-portfolios with 10 stocks each gained an annualized return of 11.3% with a Sharpe ratio of 0.59, compared with 6.0% and 0.24 for the MSCI USA and 8.5% and 0.38 for the equal-weighted benchmark.
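For readers wanting to replicate the decile analysis, the sketch below forms standard 12-1 momentum deciles from a monthly price panel and computes the winners-minus-losers return; the signal definition and thresholds are conventional choices, and the paper's exact construction may differ.

```python
import pandas as pd

def wml_returns(prices, lookback=12, skip=1, min_names=50):
    """Winners-minus-losers momentum from monthly decile sorts.
    `prices`: DataFrame of month-end prices (dates x tickers)."""
    past = prices.shift(skip) / prices.shift(lookback) - 1.0  # 12-1 signal
    fwd = prices.shift(-1) / prices - 1.0                     # next-month return
    out = {}
    for date in prices.index:
        signal = past.loc[date].dropna()
        if len(signal) < min_names:          # need enough names for deciles
            continue
        decile = pd.qcut(signal, 10, labels=False, duplicates="drop")
        nxt = fwd.loc[date, signal.index]
        out[date] = (nxt[decile == decile.max()].mean()
                     - nxt[decile == decile.min()].mean())
    return pd.Series(out)
```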
arXiv
We study contextual search, a generalization of binary search in higher dimensions, which captures settings such as feature-based dynamic pricing. Standard game-theoretic formulations of this problem assume that agents act in accordance with a specific behavioral model. In practice, however, some agents may not subscribe to the dominant behavioral model or may act in seemingly arbitrary, irrational ways. Existing algorithms depend heavily on the behavioral model being (approximately) accurate for all agents and perform poorly in the presence of even a few such arbitrarily irrational agents.
We initiate the study of contextual search when some of the agents can behave in ways inconsistent with the underlying behavioral model. In particular, we provide two algorithms, one built on robustifying multidimensional binary search methods and one on translating the setting to a proxy setting appropriate for gradient descent. Our techniques draw inspiration from learning theory, game theory, high-dimensional geometry, and convex analysis.
SSRN
Taxpayers could be the ultimate winners from government support for "made-in-crisis" startups through the right investment structures.
arXiv
Deep reinforcement learning (DRL) has reached superhuman levels in complex tasks such as game playing (Go) and autonomous driving. However, it remains an open question whether DRL can reach human level in applications to financial problems, in particular in detecting crisis patterns and consequently disinvesting. In this paper, we present an innovative DRL framework consisting of two sub-networks, fed respectively with the past performances and standard deviations of portfolio strategies and with additional contextual features. The second sub-network plays an important role as it captures dependencies on common financial indicator features, such as risk aversion, the economic surprise index, and correlations between assets, allowing context-based information to be taken into account. We compare different network architectures, either using convolutional layers to reduce the network's complexity or LSTM blocks to capture time dependency, and examine whether previous allocations are important in the modelling. We also use adversarial training to make the final model more robust. Results on the test set show this approach substantially outperforms traditional portfolio optimization methods like Markowitz and is able to detect and anticipate crises like the current COVID one.
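A minimal sketch of the two-sub-network idea described above: one stream encodes past strategy performances and standard deviations, the other encodes contextual indicators, and their embeddings are fused into portfolio weights. All layer sizes and shapes here are hypothetical; the paper's architectures (and its LSTM/adversarial variants) are not reproduced.

```python
import torch
import torch.nn as nn

class TwoStreamAllocator(nn.Module):
    """Two-stream allocator sketch: performance/volatility stream plus
    contextual-feature stream, concatenated and mapped to weights."""
    def __init__(self, n_strategies=5, n_context=10):
        super().__init__()
        self.perf_net = nn.Sequential(  # past perf + std-dev channels
            nn.Conv1d(2 * n_strategies, 16, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.ctx_net = nn.Sequential(   # contextual indicators
            nn.Linear(n_context, 16), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(32, n_strategies),
                                  nn.Softmax(dim=-1))  # long-only weights

    def forward(self, perf, ctx):
        return self.head(torch.cat([self.perf_net(perf),
                                    self.ctx_net(ctx)], dim=-1))

model = TwoStreamAllocator()
w = model(torch.randn(8, 10, 60), torch.randn(8, 10))  # batch of 8
print(w.shape, w.sum(dim=-1))  # (8, 5), each row sums to 1
```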
SSRN
We investigate the motivations and value implications of corporate philanthropy by exploiting a global sample of publicly listed firms from 45 countries that provide disaster-relief grants to affected communities. We argue that, while corporate philanthropy in general entails agency concerns, the saliency of large, attention-grabbing natural disasters amplifies the strategic benefits of donating. We find that the returns from donating increase with disaster severity and become positive for firms that rely more on reputation and social image. Returns are also higher in countries with low government relief support, for medium-sized donations, and for in-kind donations. Overall, our results highlight the strategic role of corporate philanthropy, which can lead to net increases in firm value and societal welfare if the strategic benefits of donating are sufficiently large.
SSRN
These appendices accompany my paper "Does Technology Lower the Cost of Education without Reducing Quality? A Financial Modeling Approach to Flipping the Classroom" (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3694205).
SSRN
We study reactions to job destruction on Twitter. We use information on large-scale job-destruction and job-creation events announced in the United Kingdom over the period 2013-2018. We match it with data collected on Twitter regarding the number and sentiment of tweets posted around the time of the announcement that mention the company name. We show that job-destruction announcements immediately elicit numerous and strongly negative reactions. On the day of the announcement, the number of tweets and first-level replies sharply increases, as does the negativity of the sentiment of the posted tweets. These reactions are systematically stronger than reactions to job creation. We also show that they trigger significant losses in the market value of the downsizing firms. Our findings suggest that job destruction generates reputational costs for firms to the extent that it induces a strong negative buzz around the company name.
SSRN
Venture-capital-backed startups are often crucibles of conflict between common and preferred shareholders, particularly around exit decisions. Such conflicts are so common, in fact, that they have catalyzed an emergent judicial precedent -- the Trados doctrine -- that requires boards to prioritize common shareholders' interest and to treat preferred shareholders as contractual claimants. We evaluate the Trados doctrine using a model of startup governance that interacts capital structure, corporate governance, and liability rules. The nature and degree of inter-shareholder conflict turns not only on the relative rights and options of equity participants, but also on a firm's intrinsic value as well as its value to potential third-party bidders. Certain combinations of these factors can cause both common and preferred shareholders' incentives to stray from value maximization. We show that efficient decisions can be induced by an "anti-Trados" rule that emphasizes preferred shareholders' interests and treats common shareholders as contractual claimants. The Trados doctrine, by contrast, cannot categorically reconcile private interests with value maximization. More generally, our model offers a precise mechanism through which corporate governance and capital structure jointly determine firm value.
arXiv
This paper shows how to use Chebyshev Tensors to compute dynamic sensitivities of financial instruments within a Monte Carlo simulation. The dynamic sensitivities are then used to compute Dynamic Initial Margin as defined by ISDA's SIMM. The technique is benchmarked against dynamic sensitivities obtained using pricing functions like the ones found in risk engines. We obtain high accuracy and computational gains for FX swaps and spread options.
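The core trick (replace an expensive pricer with a Chebyshev proxy, then differentiate the proxy) can be sketched in one dimension; the Black-Scholes call below is a stand-in for a risk-engine pricing call, and all domain and degree choices are illustrative, not the paper's settings.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.stats import norm

# Stand-in pricing function: BS call as a function of spot, rest fixed
K, T, r, vol = 100.0, 1.0, 0.01, 0.2
def price(S):
    d1 = (np.log(S / K) + (r + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - vol * np.sqrt(T))

# Fit the Chebyshev proxy once, on Chebyshev nodes of the spot domain
lo, hi = 60.0, 140.0
nodes = 0.5 * (lo + hi) + 0.5 * (hi - lo) * np.cos(np.pi * (np.arange(16) + 0.5) / 16)
proxy = C.Chebyshev.fit(nodes, price(nodes), deg=15, domain=[lo, hi])
delta = proxy.deriv()  # sensitivities = exact derivative of the proxy

S_paths = np.random.default_rng(1).uniform(70, 130, 10_000)  # MC scenarios
print(delta(S_paths)[:3])  # cheap per-scenario deltas, no re-pricing
```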
SSRN
Eaton Corporation, a diversified industrial conglomerate, had been changing its strategic focus and re-configuring its portfolio of businesses for the past 15 years. Through more than 70 acquisitions and 50 divestitures, Eaton had narrowed its strategic focus considerably and was now pursuing a strategy based on "intelligent power" (i.e., using digitally enabled products, data, and software to increase the efficiency and reliability of its products and services). In January 2020, Eaton received an offer from Danfoss, a diversified Danish multinational, to buy its hydraulics business for $3.3 billion. Eaton CEO Craig Arnold must decide if it makes sense strategically to sell this business and if $3.3 billion is a fair price. This short case shows how firms use discounted cash flow (DCF) analysis to make important strategic decisions. Rather than analyzing the free cash flows in detail, this case focuses on the cost of capital. Designed in this way, it helps students understand the intuition behind and the mechanics of calculating a firm's weighted average cost of capital (WACC) using the capital asset pricing model (CAPM). An Appendix explains the intuition and derives the WACC formula; case data allow students to calculate the WACC and provide an opportunity to discuss the various assumptions. The case also explains the theoretical differences between a corporate and a divisional cost of capital, and vividly illustrates the potential for serious valuation errors if the incorrect WACC is used. Finally, and equally importantly, this case highlights one of the most successful black executives in corporate America (Craig Arnold), who would have been the fifth black CEO to head a Fortune 500 company in 2019 had Eaton been domiciled in the U.S. (Eaton relocated to Ireland in 2012). Although this case was designed to emphasize the discount rate specifically, it can be used across two classes to teach DCF valuation more generally. On the first day, the instructor can teach the definition and calculation of FCFs (and terminal values); on the second day, the instructor can discuss the WACC and the resulting NPV.
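As a reminder of the mechanics the case teaches, here is a minimal sketch of the two formulas involved; all numbers are illustrative placeholders, not the case's data.

```python
def capm_cost_of_equity(rf, beta, mrp):
    """CAPM: r_E = r_f + beta * market risk premium."""
    return rf + beta * mrp

def wacc(E, D, r_e, r_d, tax):
    """WACC = E/V * r_E + D/V * r_D * (1 - tax), with V = E + D."""
    V = E + D
    return (E / V) * r_e + (D / V) * r_d * (1 - tax)

# Illustrative inputs only
r_e = capm_cost_of_equity(rf=0.02, beta=1.1, mrp=0.05)
print(f"{wacc(E=30e9, D=10e9, r_e=r_e, r_d=0.03, tax=0.21):.4f}")
```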
SSRN
We examine the effect of bank mergers on the price and availability of credit in the residential mortgage market. We find that, compared to non-acquiring banks in the same local market, acquiring banks that gain large market shares charge significantly higher interest rates but also lend larger amounts on non-agency mortgages in the years following the acquisition, and these effects vary significantly across prime, Alt-A, and subprime loans. The corresponding effects for mortgages sold to Fannie Mae and Freddie Mac are economically insignificant. Acquiring banks also increase approval rates for conventional mortgage applications, though this effect is weaker for low-income, black, and Hispanic applicants; and they decrease approval rates for FHA mortgage applications, especially for low-income and non-white applicants.
arXiv
The structures in a country, including technologies, industries, infrastructure, and institutions, have three attributes: structurality, durationality, and transformality. The paper proposes a novel method to model structural transformation in a market economy. Under the common knowledge assumption, the paper proposes a generic model combining optimal control and optimal switching to study the static and dynamic equilibria of resource allocation under a given industrial structure, as well as the equilibrium that arises when the condition for transforming industrial structures is met, with a social planner maximizing the representative household's utility in a market economy. The paper establishes the mathematical underpinnings of the static, dynamic, and structural equilibria. The generic model and its equilibria are then extended to economies with complicated economic structures consisting of hierarchical production, composite consumption, technology adoption and innovation, infrastructure, and economic and political institutions. The paper concludes with a brief discussion of applications of the proposed methodology to economic development problems in other scenarios.
arXiv
In this paper, we consider a variety of multi-state hidden Markov models for predicting and explaining Bitcoin, Ether, and Ripple returns in the presence of state (regime) dynamics. In addition, we examine the effects of several financial, economic, and cryptocurrency-specific predictors on the cryptocurrency return series. Our results indicate that the four-state non-homogeneous hidden Markov model has the best one-step-ahead forecasting performance among all competing models for all three series. The superiority of the predictive densities over the single-regime random walk model stems from the fact that the states capture alternating periods with distinct return characteristics. In particular, we identify bull, bear, and calm regimes for the Bitcoin series, and periods with different profit and risk magnitudes for the Ether and Ripple series. Finally, we observe that, conditionally on the hidden states, the predictors have different linear and non-linear effects.
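A minimal regime-detection sketch in the same spirit, using hmmlearn's plain (homogeneous) Gaussian HMM; the paper's non-homogeneous specification with covariates is more involved, and the placeholder returns below stand in for actual crypto data.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(7)
returns = 0.02 * rng.standard_normal((2000, 1))  # placeholder daily returns

model = GaussianHMM(n_components=4, covariance_type="full",
                    n_iter=200, random_state=7)
model.fit(returns)                  # EM estimation of the 4 regimes
states = model.predict(returns)     # Viterbi-decoded regime path
print(model.means_.ravel())         # per-regime mean return
print(np.round(model.transmat_, 2)) # regime transition matrix
```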
arXiv
This paper identifies latent group structures in the effect of motherhood on employment by employing the C-Lasso, a recently developed, purely data-driven classification method. Moreover, I assess how the introduction of the generous German parental benefit reform in 2007 affects the different cluster groups by taking advantage of an identification strategy that combines the sharp regression discontinuity design and hypothesis testing of predicted employment probabilities. The C-Lasso approach enables heterogeneous employment effects across mothers, which are classified into an a priori unknown number of cluster groups, each with its own group-specific effect. Using novel German administrative data, the C-Lasso identifies three different cluster groups pre- and post-reform. My findings reveal marked unobserved heterogeneity in maternal employment and that the reform affects the identified cluster groups' employment patterns differently.
SSRN
We investigate how the growth of index-based investing impacts intraday stock dynamics using a large high-frequency dataset consisting of 1-second-level trade data for all S&P 500 constituents from 2004 to 2018. We estimate intraday trading volume, volatility, correlation, and beta using estimators that are statistically efficient under market microstructure noise and observation asynchronicity. We find that the intraday patterns indeed change substantially over time. For example, in the recent decade, trading volume and correlation increase significantly at the end of the trading session; the betas of different stocks start dispersed in the morning but generally move towards one during the day. In addition, the daily dispersion in trading volume is high at the market open and low near the market close. These intraday patterns demonstrate the implications of the growth of index-based strategies and the active-open, passive-close intraday trading profile. We support our interpretation theoretically via a market impact model with time-varying liquidity provision from both single-stock and index-fund investors.
arXiv
We present a baseline stochastic framework for assessing inter-sectoral relationships in a generic economy. We show that, irrespective of the specific features of the technology matrix for a given country or a particular year, the Leontief multipliers (and any upstreamness/downstreamness indicator computed from the Leontief inverse matrix) follow a universal pattern, which we characterize analytically. We formulate a universal benchmark to assess the structural inter-dependence of sectors in a generic economy. Several empirical results on the World Input-Output Database (WIOD, 2013 Release) are presented that corroborate our findings.
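For concreteness, the Leontief objects mentioned above are a few lines of linear algebra; the sketch below uses a made-up 3-sector technology matrix in place of a WIOD table.

```python
import numpy as np

# Toy technology matrix A (column j = inputs per unit of sector j output)
A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.10, 0.25],
              [0.15, 0.05, 0.10]])

I = np.eye(A.shape[0])
L = np.linalg.inv(I - A)      # Leontief inverse
multipliers = L.sum(axis=0)   # column sums: total output multipliers
print(np.round(L, 3), multipliers)
```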
arXiv
We investigate the problem of learning undirected graphical models under Laplacian structural constraints from the point of view of financial market data. We show that Laplacian constraints have meaningful physical interpretations related to the market index factor and to the conditional correlations between stocks. Those interpretations lead to a set of guidelines that users should be aware of when estimating graphs in financial markets. In addition, we propose algorithms to learn undirected graphs that account for stylized facts and tasks intrinsic to financial data such as non-stationarity and stock clustering.
SSRN
This paper investigates how economic links from customer-supplier relationships affect liquidity commonality and its pricing. I show that a stock's liquidity co-moves with the liquidity of its economically linked stocks and that this liquidity commonality decreases with the level of information asymmetry on the stock. A long-short portfolio formed on high-minus-low liquidity commonality with economically linked firms yields economically and statistically significant average returns, and these returns cannot be explained by major known systematic risk factors. The results imply that supply-chain networks are another important channel for liquidity risk.
SSRN
This paper considers how financial authorities should react to environmental threats beyond climate change. These include biodiversity loss, water scarcity, ocean acidification, chemical pollution and, as starkly illustrated by the COVID-19 pandemic, zoonotic disease transmission, among others. We first provide an overview of these nature-related financial risks (NRFR) and then show how the financial sector is both exposed to them and contributes to their development via its lending, and via the propagation and amplification of financial shocks. We argue that NRFR, being systemic, endogenous and subject to "radical uncertainty", cannot be sufficiently managed through "market-fixing" approaches based on information disclosure and quantitative risk estimates. Instead, we propose that financial authorities utilise a "precautionary policy approach", making greater use of qualitative methods of managing risk, to support a controlled regime shift towards more sustainable capital allocation. A starting point would be the identification and exclusion of clearly unsustainable activities (e.g. deforestation), the financing of which should be discouraged via micro- and macro-prudential policy tools. Monetary policy tools, such as asset purchase programmes and collateral operations, as well as central banks' own funds, should exclude assets linked to such activities.
SSRN
Central banks unexpectedly tightening policy rates often observe the exchange value of their currency depreciate, rather than appreciate as predicted by standard models. We document this for Fed and ECB policy days using event studies and ask whether an information effect, in which the public attributes the policy surprise to an unobserved state of the economy that the central bank is signaling through its policy, may explain the anomaly. It turns out that many informational assumptions make a standard two-country New Keynesian model match this behavior. To identify the particular mechanism, we condition on multiple asset prices in the event study and model the implications for these. We find heterogeneity in this dimension in the event study, and no model with a single regime can match the evidence. Further, even after conditioning on possible information effects driving longer-term interest rates, there appear to be other drivers of exchange rates. Our results show that existing models have a long way to go in reconciling event-study analysis with model-based mechanisms of asset pricing.
arXiv
This paper presents a tractable model of the non-linear dynamics of market returns using a Langevin approach. Due to the non-linearity of the interaction potential, the model admits regimes of both small and large return fluctuations. The Langevin dynamics are mapped onto an equivalent quantum mechanical (QM) system. Borrowing ideas from supersymmetric quantum mechanics (SUSY QM), we use a parameterized ground state wave function (WF) of this QM system as a direct input to the model, which also fixes a non-linear Langevin potential. The stationary distribution of the original Langevin model is given by the square of this WF and is thus also a direct input to the model. Using a two-component Gaussian mixture as the ground state WF with an asymmetric double-well potential produces a tractable low-parametric model with interpretable parameters, referred to as the NES (Non-Equilibrium Skew) model. Supersymmetry (SUSY) is then used to find time-dependent solutions of the model in an analytically tractable way. The model produces time-varying variance, skewness and kurtosis of market returns, whose time variability can be linked to the probabilities of crisis-like events. For option pricing out of equilibrium, the NES model offers a closed-form approximation by a mixture of three Black-Scholes prices, which can be calibrated to index options data and used to predict moments of future returns. The NES model is shown to be able to describe both the regime of a benign market and a market in crisis or severe distress.
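The "mixture of three Black-Scholes prices" has a simple generic form: a weighted sum of standard calls, one per mixture component. The sketch below shows only that generic form; the NES calibration that would pin down the weights and component volatilities is not reproduced, and the parameter values are invented.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

def mixture_call(S, K, T, r, weights, sigmas):
    """Call price as a weighted sum of BS prices, one per component."""
    return sum(w * bs_call(S, K, T, r, s) for w, s in zip(weights, sigmas))

# Hypothetical weights/vols for benign, stressed, and crisis components
print(mixture_call(100, 100, 0.5, 0.01,
                   weights=[0.7, 0.25, 0.05], sigmas=[0.15, 0.35, 0.80]))
```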
arXiv
In this study we arrive at a closed-form expression for measuring vector assortativity in networks, motivated by our use-case of observing patterns of social mobility in a society. Based on existing work on social mobility within the economics literature and on social reproduction within the sociology literature, we motivate the construction of an occupational network structure for observing mobility patterns. Over this structure, we define mobility as the assortativity of occupations attributed by the representation of categories such as gender, geography, or social group. We compare the results from our vector assortativity measure and averaged scalar assortativity in the Indian context, relying on the NSSO 68th round on employment and unemployment. Our findings indicate that the trends indicated by our vector assortativity measure are very similar to those indicated by the averaged scalar assortativity index. We discuss some implications of this work and suggest future directions.
SSRN
The Black and Scholes call function is widely used for pricing and hedging. In this paper we present a new global approximating formula for the Black and Scholes call function that can be useful for deriving the risk of options, i.e., the implied volatility. Finally, we compare our results numerically with some popular methods available in the literature (which are generally local) and show, through Monte Carlo analysis, the computation error for extreme cases of both volatility and moneyness.
arXiv
Graphical models are a powerful tool to estimate a high-dimensional inverse covariance (precision) matrix and have been applied to the portfolio allocation problem. The assumption made by these models is sparsity of the precision matrix. However, when stock returns are driven by common factors, this assumption does not hold. Our paper develops a framework for estimating a high-dimensional precision matrix which combines the benefits of exploiting the factor structure of stock returns and the sparsity of the precision matrix of the factor-adjusted returns. The proposed algorithm is called Factor Graphical Lasso (FGL). We study a high-dimensional portfolio allocation problem when the asset returns admit an approximate factor model. In high dimensions, when the number of assets is large relative to the sample size, the sample covariance matrix of the excess returns is subject to large estimation uncertainty, which leads to unstable solutions for portfolio weights. To resolve this issue, we consider a decomposition into low-rank and sparse components. This strategy allows us to consistently estimate the optimal portfolio in high dimensions, even when the covariance matrix is ill-behaved. We establish consistency of the portfolio weights in a high-dimensional setting without assuming sparsity of the covariance or precision matrix of stock returns. Our theoretical results and simulations demonstrate that FGL is robust to heavy-tailed distributions, which makes our method suitable for financial applications. The empirical application uses daily and monthly data for the constituents of the S&P500 to demonstrate the superior performance of FGL compared to the equal-weighted portfolio, the index, and several prominent precision- and covariance-based estimators.
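A much-simplified sketch of the pipeline's shape: strip an estimated factor component, run the graphical lasso on the residuals, and back out minimum-variance weights from an estimated precision matrix. The real FGL recombines the factor and idiosyncratic pieces rather than using the residual precision directly, so treat this only as orientation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
R = rng.standard_normal((500, 50))            # placeholder returns (T x p)

# 1) Strip an estimated factor component (PCA proxies for the factors)
pca = PCA(n_components=3).fit(R)
residuals = R - pca.inverse_transform(pca.transform(R))

# 2) Sparse precision of the factor-adjusted returns via graphical lasso
theta = GraphicalLasso(alpha=0.05).fit(residuals).precision_

# 3) Global-minimum-variance weights from an estimated precision matrix
ones = np.ones(theta.shape[0])
w = theta @ ones / (ones @ theta @ ones)
print(w.sum())                                # 1.0 by construction
```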
SSRN
This paper studies the performance of portfolios based on the Hierarchical Equal Risk Contribution (HERC) algorithm in the Chinese stock market. Specifically, we consider a variety of risk measures for calculating weight allocations, including equal weighting, variance, standard deviation, expected shortfall, and conditional drawdown risk, and four types of linkage criteria for agglomerative clustering, namely single, complete, average, and Ward linkage. We compare the performance of the portfolios based on the HERC algorithm to the equal-weighted and inverse-variance portfolios. We find that most HERC portfolios are not able to beat the equal-weighted and inverse-variance portfolios in terms of several comparison measures, and that HERC with Ward linkage seems to dominate the other linkages. However, the results do not show that any risk measure consistently beats the others.
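The clustering half of HERC is easy to sketch: build a correlation-distance matrix, cluster it hierarchically, then allocate risk budgets by cluster. The version below equalizes budgets across a fixed number of clusters and uses inverse variance within each, which is a deliberate simplification of the full recursive HERC bisection.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
R = rng.standard_normal((750, 20))                 # placeholder returns
corr = np.corrcoef(R, rowvar=False)
dist = np.sqrt(0.5 * (1 - corr))                   # correlation distance
Z = linkage(squareform(dist, checks=False), method="ward")
clusters = fcluster(Z, t=4, criterion="maxclust")  # cut tree into 4 groups

# Equal risk budget per cluster, inverse variance within each cluster
var = R.var(axis=0)
w = np.zeros(len(var))
for c in np.unique(clusters):
    idx = clusters == c
    w[idx] = (1 / var[idx]) / (1 / var[idx]).sum() / len(np.unique(clusters))
print(w.sum())  # 1.0
```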
arXiv
This paper studies politically feasible policy solutions to inequities in local public goods provision. I focus in particular on the entwined issues of high property taxes, geographic income disparities, and inequalities in public education prevalent in the United States. It has long been recognized that with a mobile population, local administration and funding of schools leads to competition between districts. By accounting for heterogeneity in incomes and home qualities, I am able to shed new light on this phenomenon, and make novel policy recommendations. I characterize the equilibrium in a dynamic general equilibrium model of location choice and education investment with a competitive housing market, heterogeneous wealth levels and home qualities, and strategic district governments. When all homes are owner-occupied, I show that competition between strategic districts leads to over-taxation in an attempt to attract wealthier residents. A simple class of policies that cap and/or tax the expenditure of richer districts are Pareto improving, and thus politically feasible. These policies reduce inequality in access to education while increasing expenditure for under-funded schools. Gains are driven by mitigation of the negative externalities generated by excessive spending among wealthier districts. I also discuss the policy implications of the degree of homeownership. The model sheds new light on observed patterns of homeownership, location choice, and income. Finally, I test the assumptions and implications empirically using a regression discontinuity design and data on property tax referenda in Massachusetts.
arXiv
Inspired by developments in deep generative models, we propose a model-based RL approach, coined Reinforced Deep Markov Model (RDMM), designed to integrate the desirable properties of a reinforcement learning algorithm acting as an automatic trading system. The network architecture allows for the possibility that market dynamics are partially visible and are potentially modified by the agent's actions. The RDMM filters incomplete and noisy data to create better-behaved input data for RL planning. The policy search optimisation also properly accounts for state uncertainty. Due to the complexity of the RDMM architecture, we performed ablation studies to better understand the contributions of its individual components. To test the financial performance of the RDMM we implement policies using variants of the Q-Learning, DynaQ-ARIMA, and DynaQ-LSTM algorithms. The experiments show that the RDMM is data-efficient and provides financial gains compared to the benchmarks in the optimal execution problem. The performance improvement becomes more pronounced when price dynamics are more complex, as demonstrated using real data sets from the limit order books of Facebook, Intel, Vodafone, and Microsoft.
SSRN
This chapter introduces the Research Handbook on Comparative Corporate Governance and surveys several of the central themes addressed in the book. Most corporate governance research deals with the interaction between board members, officers, and shareholders, primarily in large, publicly traded corporations. Considerable volumes of literature are thus preoccupied with reducing conflicts of interest between shareholders and management, and consequently minimizing agency costs, vindicating the narrow finance perspective. Given the predominance of controlling shareholders around the globe, the literature increasingly focuses on conflicts between controlling and minority shareholders. In a comparative or international context, research also often addresses all groups whose interests are affected by corporate activities and who have some degree of influence on corporations, such as creditors and employees. The book attempts to take a broad perspective. It deals with interactions between boards and shareholders, as well as minority and controlling shareholders. We cover legal duties and their enforcement, as well as the balance of powers generated by the institutional setup. Nevertheless, the interests of other "stakeholders" are very much present. The authors also explore corporate purpose, including short-termism, corporate social responsibility (CSR) and environmental, social and governance (ESG) issues. In addition, the book tackles key debates in the field, including the significance of legal origins for the development of corporate law, convergence in corporate governance, and the appropriate choice of research methods in comparative corporate governance scholarship.
arXiv
The electricity market, which was initially designed for dispatchable power plants and inflexible demand, is being increasingly challenged by new trends, such as the high penetration of intermittent renewables and the transformation of the consumers' energy space. To accommodate these new trends and improve the performance of the market, several modifications to current market designs have been proposed in the literature. Given the vast variety of these proposals, this paper provides a comprehensive investigation of the modifications proposed in the literature as well as a detailed assessment of their suitability for improving market performance under the continuously evolving electricity landscape. To this end, first, a set of criteria for an ideal market design is proposed, and the barriers present in current market designs hindering the fulfillment of these criteria are identified. Then, the different market solutions proposed in the literature, which could potentially mitigate these barriers, are extensively explored. Finally, a taxonomy of the proposed solutions is presented, highlighting the barriers addressed by each proposal and the associated implementation challenges. The outcomes of this analysis show that even though each barrier is addressed by at least one proposed solution, no single proposal is able to address all the barriers simultaneously. In this regard, a future-proof market design must combine different elements of the proposed solutions to comprehensively mitigate market barriers and overcome the identified implementation challenges. Thus, by thoroughly reviewing this rich body of literature, this paper makes key contributions enabling the advancement of the state of the art towards increasingly efficient electricity markets.
SSRN
Intuitively, the signs of model predictions matter a great deal in finance, especially for constructing investment strategies. This paper proposes an approach in which the loss function penalizes prediction errors in different ways. In particular, the loss function simultaneously considers errors in prediction signs and the sizes and signs of the residuals. Less weight is given to residuals with correct prediction signs and more weight to residuals with wrong prediction signs. This matters because agents make decisions according to model predictions, especially the signs of the predictions. Simultaneously, residuals of larger size are penalized more and those of smaller size less, and the signs of the residuals enter the loss function because they also affect decision-making processes. Training models with weights that vary with the correctness of the prediction signs and with the sizes and signs of the residuals is therefore significant for decision making. This paper proposes a new approach, termed Sign regression, that incorporates these considerations. Simulation results show that Sign regression consistently outperforms the ordinary least squares and least absolute deviations methods out-of-sample. An application to the Fama-French three-factor model also shows good performance of Sign regression.
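The abstract does not give the exact loss, but one asymmetric weighting consistent with the description, up-weighting residuals whose predicted sign is wrong, can be written and fitted in a few lines; the weight of 3.0 is an arbitrary illustration, not the paper's choice.

```python
import numpy as np
from scipy.optimize import minimize

def sign_weighted_loss(beta, X, y, wrong_sign_weight=3.0):
    """Squared-error loss that up-weights observations whose predicted
    sign disagrees with the realized sign (the asymmetry is the point)."""
    pred = X @ beta
    weights = np.where(np.sign(pred) != np.sign(y), wrong_sign_weight, 1.0)
    return np.mean(weights * (y - pred) ** 2)

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 3))
y = X @ np.array([0.5, -0.2, 0.1]) + 0.5 * rng.standard_normal(500)

fit = minimize(sign_weighted_loss, np.zeros(3), args=(X, y),
               method="Nelder-Mead")
print(fit.x)  # coefficients under the sign-aware loss
```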
SSRN
When the coronavirus crisis passes we'll need startups to fuel our recovery. But they won't survive without urgent government action.
SSRN
We quantify the role of global production linkages in explaining spillovers of U.S. monetary policy shocks to stock returns of fifty-four sectors in twenty-six countries. We first present a conceptual framework based on a standard open-economy production network model that delivers a spillover pattern consistent with a spatial autoregression (SAR) process. We then use the SAR model to decompose the overall impact of U.S. monetary policy on stock returns into a direct and a network effect. We find that up to 80 percent of the total impact of U.S. monetary policy shocks on average country-sector stock returns is due to the network effect of global production linkages. We further show that U.S. monetary policy shocks have a direct impact predominantly on U.S. sectors and then propagate to the rest of the world through the global production network. Our results are robust to controlling for correlates of the global financial cycle, foreign monetary policy shocks, and to changes in variable definitions and empirical specifications.
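The direct-versus-network decomposition has a compact linear-algebra form: in a SAR reduced form $y = (I - \rho W)^{-1} X\beta$, the diagonal of the inverse carries the own-shock (direct) effect and the off-diagonal part the network effect. A toy sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6                                  # country-sectors (toy size)
W = rng.random((n, n)); np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)      # row-normalized production links
rho, beta = 0.5, 1.0
x = rng.standard_normal(n)             # exposure to the policy shock

M = np.linalg.inv(np.eye(n) - rho * W) # SAR reduced-form multiplier
total = M @ (x * beta)
direct = np.diag(M) * (x * beta)       # own-shock component
network = total - direct               # spillovers via the network
print(network.sum() / total.sum())     # share attributable to linkages
```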
arXiv
This work proposes a supervised multi-channel time-series learning framework for financial stock trading. Although many deep learning models have recently been proposed in this domain, most of them treat stock trading time-series data as 2-D image data, whereas its true nature is 1-D time-series data. Moreover, since stock trading data are multi-channel, existing techniques that do treat them as 1-D time-series data offer no mechanism to effectively fuse the information carried by the multiple channels. To address both of these shortcomings, we propose an end-to-end supervised learning framework inspired by the previously established (unsupervised) convolution transform learning framework. Our approach consists of processing the data channels through separate 1-D convolution layers, then fusing the outputs with a series of fully-connected layers, and finally applying a softmax classification layer. The peculiarity of our framework, SuperDeConFuse (SDCF), is that we remove the nonlinear activation located between the multi-channel convolution layers and the fully-connected layers, as well as the one located between the latter and the output layer. We compensate for this removal by introducing a suitable regularization on the aforementioned layer outputs and filters during the training phase. Specifically, we apply a log-determinant regularization on the layer filters to break symmetry and force diversity in the learnt transforms, and we enforce a non-negativity constraint on the layer outputs to mitigate the issue of dead neurons. This results in the effective learning of a richer set of features and filters with respect to a standard convolutional neural network. Numerical experiments confirm that the proposed model yields considerably better results than state-of-the-art deep learning techniques for the real-world problem of stock trading.
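The two regularizers are straightforward to write down. A minimal PyTorch sketch, with layer sizes and the 0.1 trade-off weight chosen arbitrarily rather than taken from the paper:

```python
import torch

def logdet_penalty(W, eps=1e-4):
    """-log det(W W^T) on the flattened filter bank: pushes the learnt
    filters toward full rank, i.e., diverse transforms."""
    gram = W @ W.T + eps * torch.eye(W.shape[0])
    return -torch.logdet(gram)

def nonneg_penalty(z):
    """Quadratic hinge pushing layer outputs toward the positive orthant,
    a soft substitute for the removed ReLU nonlinearity."""
    return torch.clamp(-z, min=0.0).pow(2).mean()

conv = torch.nn.Conv1d(1, 8, kernel_size=5)
x = torch.randn(16, 1, 30)             # batch of 1-D price series
z = conv(x)                            # linear outputs (no activation)
loss = nonneg_penalty(z) + 0.1 * logdet_penalty(conv.weight.flatten(1))
loss.backward()
```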
arXiv
As more tech companies engage in rigorous economic analyses, we are confronted with a data problem: in-house papers cannot be replicated due to use of sensitive, proprietary, or private data. Readers are left to assume that the obscured true data (e.g., internal Google information) indeed produced the results given, or they must seek out comparable public-facing data (e.g., Google Trends) that yield similar results. One way to ameliorate this reproducibility issue is to have researchers release synthetic datasets based on their true data; this allows external parties to replicate an internal researcher's methodology. In this brief overview, we explore synthetic data generation at a high level for economic analyses.
arXiv
This study introduces a new technique to recover the implicit discount factor in the derivative market using only European put and call prices: this discount is grounded in actual transactions in active markets. Moreover, this study identifies the implied cost of funding, over OIS, of major market players. Does a liquid equity market allow arbitrage? The key idea is that the (unique) forward contract -- built using the put-call parity relation -- contains information about the market discount factor: by no-arbitrage conditions we identify the implicit interest rate such that the forward contract value does not depend on the strike. The procedure is applied to options on S&P 500 and EURO STOXX 50 indices. There is statistical evidence that, in the EURO STOXX 50 market, the implicit interest rate curve coincides with the EUR OIS one, while, in the S&P 500 market, a cost of funding of, on average, 34 basis points is added on top of the USD OIS curve.
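The key idea lends itself to a compact numerical illustration: by put-call parity, $C - P = D(F - K)$ is affine in the strike, so a cross-sectional regression of $C - P$ on $K$ recovers the discount factor $D$ from its slope. The data below are synthetic, not market quotes.

```python
import numpy as np

# Synthetic quotes: pretend D and F are unknown and must be recovered
D_true, F_true = 0.99, 3500.0
strikes = np.linspace(3000, 4000, 21)
cp_diff = D_true * (F_true - strikes)            # C - P from parity
cp_diff += np.random.default_rng(4).normal(0, 0.05, strikes.size)

# C - P is affine in K with slope -D and intercept D*F
slope, intercept = np.polyfit(strikes, cp_diff, 1)
D_hat = -slope
print(D_hat, intercept / D_hat)                  # ~0.99, ~3500
# implied continuously compounded rate to maturity T: r = -ln(D)/T
```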
SSRN
We introduce the concept of a financial stability real interest rate using a macroeconomic banking model with an occasionally binding financing constraint, as in Gertler and Kiyotaki (2010). The financial stability interest rate, r**, is the threshold interest rate that triggers the constraint being binding. Increasing imbalances in the financial sector, measured by an increase in leverage, are accompanied by a lower threshold that could trigger financial instability events. We also construct a theoretical implied financial conditions index and show how it is related to the gap between the natural and financial stability interest rates.
arXiv
Can an asset manager plan the optimal timing of her/his hedging strategies given market conditions? The standard approach based on Markowitz or other more or less sophisticated financial rules aims to find the best portfolio allocation using forecasted expected returns and risk, but fails to fully relate market conditions to hedging decisions. In contrast, Deep Reinforcement Learning (DRL) can tackle this challenge by creating a dynamic dependency between market information and hedging-strategy allocation decisions. In this paper, we present a realistic and augmented DRL framework that: (i) uses additional contextual information to decide an action; (ii) has a one-period lag between observations and actions, to account for the one-day rebalancing lag of typical asset managers; (iii) is fully tested for stability and robustness using a repeated train-test method called anchored walk-forward training, similar in spirit to k-fold cross-validation for time series; and (iv) allows managing the leverage of our hedging strategy. Our experiment for an augmented asset manager interested in sizing and timing his hedges shows that our approach achieves superior returns and lower risk.
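Anchored walk-forward splitting is simple to state in code: the training window always starts at the first observation and grows, while successive test blocks roll forward. A minimal sketch (window sizes are arbitrary):

```python
def anchored_walk_forward(n_obs, initial_train, test_size):
    """Yield (train_idx, test_idx) pairs: the train window is anchored
    at 0 and grows; each test block immediately follows it."""
    start = initial_train
    while start + test_size <= n_obs:
        yield range(0, start), range(start, start + test_size)
        start += test_size

for train, test in anchored_walk_forward(1000, 600, 100):
    print(len(train), (test[0], test[-1]))
```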
arXiv
The purpose of this paper is to test the multi-factor beta model implied by the generalized arbitrage pricing theory (APT) and the Adaptive Multi-Factor (AMF) model with the Groupwise Interpretable Basis Selection (GIBS) algorithm, without imposing the exogenous assumption of constant betas. The intercept (arbitrage) tests validate both the AMF and the Fama-French 5-factor (FF5) models. We perform time-invariance tests of the betas for both the AMF and FF5 models over various time periods. We show that for nearly all time periods shorter than six years, the beta coefficients are time-invariant for the AMF model but not for the FF5 model. The beta coefficients are time-varying for both models over longer time periods. Therefore, using the dynamic AMF model with a reasonable rolling window (such as five years) is more powerful and stable than using the FF5 model.
SSRN
In May 2013, TransDigm, a company that manufactured a wide range of highly engineered aerospace parts for both military and civilian aircraft, announced it was buying Arkwin Industries for $286 million in cash (3 times Arkwin's sales of $91 million). Having acquired more than 40 companies in the past 20 years, TransDigm was an experienced acquirer with a unique business model focused exclusively on value creation. This case describes TransDigm's acquisition, integration, and talent development processes as well as the changes TransDigm implemented at Arkwin in the first three years after the acquisition. It serves as a complement to the TransDigm in 2017 case (HBS case #720-422). Whereas the TransDigm in 2017 case provides an overview of the company, its history, its value creation strategy, and its incredible financial performance, the Arkwin Industries case provides a deep dive into a single transaction as a way to illustrate TransDigm's value creation strategy in practice. That strategy consists of three parts--value-based pricing, cost reductions, and new product development--and has the goal of doubling a target firm's operating margins and cash flows within five years. This case allows students to understand TransDigm's unique business model in greater detail and to explore how it creates value and for whom. The case focuses primarily on one aspect of TransDigm's value creation strategy (value-based pricing) to explain how TransDigm is consistently able to generate operating margins that are 100% larger than those of similar firms in its industry. Data in the case allow students to discuss the concept of limit pricing and whether it may or may not work in this setting. To a lesser extent, the case focuses on the firm's integration and talent development process, and raises the critical question of whether TransDigm can continue to grow revenue and adjusted EBITDA at more than 20% per year indefinitely.
arXiv
In this paper we introduce a new concept for modelling electricity prices based on an unobservable intrinsic electricity price $p(\tau)$. We use it to connect the classical theory of storage with the concept of a risk premium. We derive prices for all common contracts, such as the intraday spot price, the day-ahead spot price, and futures prices. Finally, we propose an explicit model from the class of structural models and conduct an empirical analysis, in which we find an overall negative risk premium.
arXiv
We consider a model-independent pricing problem in a fixed-income market and show that it leads to a weak optimal transport problem as introduced by Gozlan et al. We use this to characterize the extremal models for the pricing of caplets on the spot rate and to establish a first robust super-replication result that is applicable to fixed-income markets.
Notably, the weak transport problem exhibits a cost function which is non-convex and thus not covered by the standard assumptions of the theory. In an independent section, we establish that weak transport problems for general costs can be reduced to equivalent problems that do satisfy the convexity assumption, extending the scope of weak transport theory. This part may be of independent interest beyond our financial application, and is accessible to readers who are not familiar with mathematical finance notation.
SSRN
In 1910 Richard D. Wyckoff proposed that any professionally traded market moved in what he termed "Price Cycles" and that these cycles could be predictably and reliably navigated to produce consistent profit. According to Wyckoff, these cycles were signified by imbalances in supply and demand which can be identified by analyzing price action, volume and time. Wyckoff theorized that the major imbalances in a market are created primarily by large, sometimes institutional investors, whom he termed the "Composite Operator (or Man)", not by a natural ebb and flow in the activities of small retail traders. The idea that the markets are professionally manipulated and that large movements were always the results of these actions was a significant contribution by Wyckoff to the world of investing: "...all the fluctuations in the market and in all the various stocks should be studied as if they were the result of one man's operations. Let us call him the Composite Man, who, in theory, sits behind the scenes and manipulates the stocks to your disadvantage if you do not understand the game as he plays it; and to your great profit if you do understand it." (The Richard D. Wyckoff Course in Stock Market Science and Technique, section 9, p. 1-2). The main cycle illustrated by Wyckoff begins with an "accumulation phase" wherein the Composite Operator (CO) accumulates large amounts of shares in a stock, spreading their stock buys over time within a relatively narrow range of prices. Once the accumulation is complete, the CO marks up the price far beyond the previous range, ending with a "distribution phase" wherein the CO sells the previously accumulated shares at higher prices, spread over a narrow range of prices, to the mass of retail traders who have now been enticed by the large rise in prices to jump into the market and buy indiscriminately. My hypothesis is not only that the same cycles and methods of price manipulation exist in the cryptocurrency market, but that there are indications of Composite Operator activity in the form of climactic or excessive price and volume changes which can be algorithmically isolated and traded to profit from the effect of these climactic changes on prices at an abnormal rate of return.
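As one concrete reading of "algorithmically isolated", the sketch below flags bars where both the absolute return and the volume are extreme relative to their rolling history; the window length and z-score threshold are illustrative choices, not the paper's calibration.

```python
import pandas as pd

def climactic_events(close, volume, window=50, z_thresh=3.0):
    """Flag bars where both the absolute return and the volume are
    extreme versus their rolling history -- one simple way to isolate
    'climactic' price/volume changes."""
    ret = close.pct_change().abs()
    z = lambda s: (s - s.rolling(window).mean()) / s.rolling(window).std()
    return (z(ret) > z_thresh) & (z(volume) > z_thresh)

# usage (hypothetical columns): events = climactic_events(df["close"], df["volume"])
```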
arXiv
Before the 2008 financial crisis, most research in financial mathematics focused on pricing options without considering the effects of counterparties' defaults, illiquidity problems, and the role of the sale and repurchase agreement (repo) market. Recently, models have been proposed to address this by computing a total valuation adjustment (XVA) of derivatives, however without considering a potential crisis in the market. In this article, we include a possible crisis by using an alternating renewal process to describe the switching between a normal financial regime and a financial crisis. We develop a framework to price the XVA of a European claim in this state-dependent situation. The price is characterized as a solution to a backward stochastic differential equation (BSDE), and we prove the existence and uniqueness of this solution. In a numerical study based on a deep learning algorithm for BSDEs, we compare the effect of different parameters on the valuation of the XVA.
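The alternating renewal regime process is easy to simulate. The sketch below uses exponential holding times with invented means for the normal and crisis regimes; the paper's holding-time distributions may differ.

```python
import numpy as np

def alternating_renewal(T, mean_normal=2.0, mean_crisis=0.5, rng=None):
    """Simulate regime switch times on [0, T]: holding times alternate
    between a normal regime (0) and a crisis regime (1)."""
    rng = np.random.default_rng(rng)
    t, state, path = 0.0, 0, []
    while t < T:
        hold = rng.exponential(mean_normal if state == 0 else mean_crisis)
        path.append((t, state))      # (switch time, regime entered)
        t += hold
        state = 1 - state
    return path

print(alternating_renewal(10.0, rng=42))
```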