Research articles for 2019-03-18

A Census of the Factor Zoo
Harvey, Campbell R.,Liu, Yan
SSRN
The rate of factor production in academic research is out of control. We document over 400 factors published in top journals. Surely, many of them are false. We explore the incentives that lead to factor mining and examine the reasons why many of the factors are simply lucky findings. The backtested results published in academic outlets are routinely cited to support commercial products. As a consequence, investors develop exaggerated expectations based on inflated backtested results and are then disappointed by the live trading experience. We provide a comprehensive census of factors published in top academic journals through January 2019. We also offer a link to a Google sheet with detailed information on each factor, including citation information and download links. Finally, we propose a citizen science project that allows researchers to add both published papers and working papers to our database.

A Political Capital Asset Pricing Model
Pagliardi, Giovanni,Poncet, Patrice,Zenios, Stavros A.
SSRN
We construct a bivariate factor of political stability and economic policy confidence, and show that it commands a significant premium of up to 15% per annum in the global, developed, and emerging markets, robust to the ICAPM, the Fama-French five-factor model, Carhart, and ICAPM Redux. We propose an international capital asset pricing model incorporating the political factor, and test global and local estimations in developed and emerging economies. The model explains up to 77% of cross-sectional returns, has good predictive power, performs better than the benchmark models in pricing equity indices in several tests, explains up to an incremental 25% of cross-sectional returns, and is robust out of sample.
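As a minimal illustrative sketch (not the authors' estimation), the cross-sectional pricing test described above follows the familiar two-pass, Fama-MacBeth style logic: estimate factor betas in time series, then estimate the premium attached to those betas across assets. The data and the political factor series below are simulated placeholders.

```python
# Two-pass cross-sectional test with a hypothetical political-stability factor.
# All series are simulated; the paper's P-factor construction and controls are
# not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
T, N = 240, 30                        # months, country indices
pol_factor = rng.normal(0, 0.02, T)   # hypothetical political/policy factor returns
mkt_factor = rng.normal(0.005, 0.04, T)
returns = 0.8 * mkt_factor[:, None] + 0.3 * pol_factor[:, None] \
          + rng.normal(0, 0.05, (T, N))

# Pass 1: time-series regressions to estimate each asset's factor betas.
X = np.column_stack([np.ones(T), mkt_factor, pol_factor])
betas = np.linalg.lstsq(X, returns, rcond=None)[0][1:].T      # shape (N, 2)

# Pass 2: period-by-period cross-sectional regressions of returns on betas.
Z = np.column_stack([np.ones(N), betas])
lambdas = np.array([np.linalg.lstsq(Z, returns[t], rcond=None)[0][1:] for t in range(T)])
prem = lambdas.mean(axis=0)
t_stat = prem / (lambdas.std(axis=0, ddof=1) / np.sqrt(T))
print("estimated premia (mkt, political):", prem, "t-stats:", t_stat)
```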

A Toolkit for Factor-Mimicking Portfolios
Pukthuanthong, Kuntara,Roll, Richard,Wang, Junbo L.,Zhang, Tengfei
SSRN
We propose enhanced necessary criteria to select Factor-Mimicking Portfolios (FMPs) that genuinely represent a true risk factor. Ideally, FMPs should (a) be correlated with underlying factors, (b) be related to the systematic risk in asset returns, (c) explain the cross-section of mean returns, and (d) be robust to the set of included assets. Existing methods do not satisfy these criteria and are exposed to several econometric difficulties such as errors-in-variables bias. We study improvements based on the Instrumental Variables (IV) method and Stein's shrinkage method. The IV approach leads to nearly unbiased risk premium estimation in simulations, while other methods have large biases. We find that FMPs constructed with IV satisfy the above criteria for equities when mimicking consumption growth, inflation, and the unemployment rate, and for corporate bonds when mimicking consumption growth, industrial production, and the default spread.
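As a minimal sketch of the baseline against which the paper's refinements are proposed: the standard regression-based FMP projects the target factor onto base-asset excess returns and uses the slope coefficients as portfolio weights. The IV and Stein-shrinkage corrections studied in the paper (which address the errors-in-variables bias mentioned above) are not implemented here, and all data are simulated placeholders.

```python
# Standard regression-based factor-mimicking portfolio (FMP) construction.
import numpy as np

rng = np.random.default_rng(1)
T, N = 300, 10
asset_excess = rng.normal(0.004, 0.05, (T, N))                 # placeholder base assets
macro_factor = asset_excess @ rng.normal(0, 0.1, N) + rng.normal(0, 0.01, T)

X = np.column_stack([np.ones(T), asset_excess])
coef = np.linalg.lstsq(X, macro_factor, rcond=None)[0]
weights = coef[1:]                                             # mimicking-portfolio weights
fmp_returns = asset_excess @ weights

# Criterion (a): the FMP should be correlated with the underlying factor.
print("corr(FMP, factor) =", np.corrcoef(fmp_returns, macro_factor)[0, 1])
```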

A fast method for pricing American options under the variance gamma model
Weilong Fu,Ali Hirsa
arXiv

We investigate methods for pricing American options under the variance gamma model. The variance gamma process is a pure jump process constructed by replacing calendar time with gamma time in a Brownian motion with drift, which makes it a time-changed Brownian motion. In general, the finite difference method and the simulation method can be used for pricing under this model, but their speed is not satisfactory, so there is a need for fast but accurate approximation methods. In the case of the Black-Merton-Scholes model there are fast approximation methods, but they cannot be utilized for the variance gamma model. We develop a new fast method inspired by the quadratic approximation method, reducing the error by applying a machine learning technique to pre-calculated quantities. We compare the performance of our proposed method with those of the existing methods and show that it is efficient and accurate for practical use.
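As a minimal sketch with illustrative parameters, the time-changed construction described above can be simulated directly: draw gamma time increments, then evaluate a drifted Brownian motion on that random clock. The snippet prices a European call by Monte Carlo as a building block only; the American pricing, the quadratic approximation, and the machine-learning correction from the paper are not reproduced.

```python
# Variance gamma paths as a gamma-time-changed Brownian motion (illustrative parameters).
import numpy as np

rng = np.random.default_rng(2)
theta, sigma, nu = -0.14, 0.2, 0.2        # drift, volatility, variance rate of the gamma clock
T, n_steps, n_paths = 1.0, 252, 10000
dt = T / n_steps

# Gamma time increments with mean dt and variance nu*dt, then Brownian increments on that clock.
dG = rng.gamma(shape=dt / nu, scale=nu, size=(n_paths, n_steps))
dX = theta * dG + sigma * np.sqrt(dG) * rng.standard_normal((n_paths, n_steps))
X = dX.cumsum(axis=1)                      # variance gamma paths

S0, r, K = 100.0, 0.05, 100.0
# Mean-correcting drift so the discounted stock price is a martingale under the pricing measure.
omega = np.log(1 - theta * nu - 0.5 * sigma**2 * nu) / nu
S_T = S0 * np.exp((r + omega) * T + X[:, -1])
euro_call = np.exp(-r * T) * np.maximum(S_T - K, 0).mean()
print("European call (Monte Carlo, VG):", round(euro_call, 4))
```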



Active and Passive Portfolio Management with Latent Factors
Ali Al-Aradi,Sebastian Jaimungal
arXiv

We address a portfolio selection problem that combines active (outperformance) and passive (tracking) objectives. We assume a general semimartingale market model where the assets' growth rate processes are driven by a latent factor. Using techniques from convex analysis, we obtain a closed-form solution for the optimal portfolio and provide a theorem establishing its uniqueness. The motivation for incorporating latent factors is to achieve improved growth rate estimation, an otherwise notoriously difficult task. To this end, we focus on a model where growth rates are driven by an unobservable Markov chain. The solution in this case requires a filtering step to obtain posterior probabilities for the state of the Markov chain from asset price information, which are subsequently used to find the optimal allocation. We show that the optimal strategy is the posterior average of the optimal strategies the investor would have held in each state, assuming the Markov chain remained in that state. Finally, we implement a number of historical backtests to demonstrate the performance of the optimal portfolio.
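As a minimal sketch of the filtering-plus-allocation step described above for a two-state hidden Markov chain: update posterior state probabilities from observed returns, then mix the state-conditional portfolios by those posteriors. The transition matrix, state growth rates, volatility, and state portfolios are assumed placeholders, not the paper's closed-form solution.

```python
# Hidden Markov filtering followed by posterior-averaged allocation (toy two-state model).
import numpy as np

P = np.array([[0.95, 0.05],              # Markov chain transition matrix (assumed)
              [0.10, 0.90]])
mu = np.array([0.08, -0.02]) / 252       # state-conditional daily growth rates (assumed)
sigma = 0.15 / np.sqrt(252)              # common daily volatility (assumed)
w_state = np.array([[0.9, 0.1],          # optimal weights if state 0 persisted (assumed)
                    [0.2, 0.8]])         # optimal weights if state 1 persisted (assumed)

def filter_and_allocate(returns, pi0=np.array([0.5, 0.5])):
    pi = pi0.copy()
    weights = []
    for r in returns:
        pred = P.T @ pi                                # predict next state distribution
        lik = np.exp(-0.5 * ((r - mu) / sigma) ** 2)   # Gaussian likelihoods (common constant omitted)
        pi = pred * lik
        pi /= pi.sum()                                 # posterior state probabilities
        weights.append(pi @ w_state)                   # posterior average of state portfolios
    return np.array(weights)

rng = np.random.default_rng(3)
sample_returns = rng.normal(mu[0], sigma, 20)
print(filter_and_allocate(sample_returns)[-1])
```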



Affine term structure models: a time-changed approach with perfect fit to market curves
Cheikh Mbaye,Frédéric Vrins
arXiv

We address the so-called calibration problem, which consists of fitting in a tractable way a given model to a specified term structure, e.g., yield or default probability curves. Time-homogeneous jump-diffusions like Vasicek or Cox-Ingersoll-Ross (possibly coupled with compound Poisson jumps, JCIR) are tractable processes but have limited flexibility; they fail to replicate actual market curves. The deterministic shift extension of the latter (Hull-White or JCIR++) is a simple yet efficient solution that is widely used by both academics and practitioners. However, the shift approach is often not appropriate when positivity is required, which is a common constraint when dealing with credit spreads or default intensities. In this paper, we tackle this problem by adopting a time change approach. On top of providing an elegant solution to the calibration problem under a positivity constraint, our model features additional interesting properties in terms of implied volatilities. It is compared to the shift extension on various credit risk applications such as credit default swaps, credit default swaptions and credit valuation adjustment under wrong-way risk. The time change approach is able to generate much larger volatility and covariance effects under the positivity constraint. Our model offers an appealing alternative to the shift in such cases.



An Intraday Trend-Following Trading Strategy on Equity Derivatives in India
Bhandari, Nishit,Chakravorty, Gaurav
SSRN
In this article, we present a trend-following investment strategy on single-stock futures. Using the price movement of the recent past, we achieve a Sharpe ratio of 1.7 on training data by cascading positions on successive positive signals and closing out positions if we hit a stop-loss. The stop-loss is computed using historical volatility. The universe is defined in an unbiased fashion to eliminate overfitting. We have used the top 75% most active single-stock futures contracts and have further filtered out products with low opening 15-minute volume. This was done to improve the scalability of the strategy and may also help avoid contracts where little volatility is expected. We have trained our strategy on historical data from 2012 to 2016 and tested it on data from 2017 to 2018. Given the nature of markets in 2017-18, we have also looked at modifying the strategy to take positions more conservatively to avoid volatile situations. Empirical studies on profitable trading strategies are rare. We endeavor to shed light on the process of developing a profitable intraday trading strategy and hope that this encourages collaboration in the active trading community.
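As a toy version of the cascading-entry, volatility-scaled stop-loss logic described above: the look-back, stop multiplier, and position cap below are assumptions for illustration, not the authors' trained parameters, and the simulated price path stands in for intraday futures data.

```python
# Toy intraday trend-following loop: add units on positive signals, exit on a vol-scaled stop.
import numpy as np

def run_strategy(prices, lookback=15, k_stop=2.0, max_units=3):
    rets = np.diff(np.log(prices))
    vol = np.std(rets[:lookback])                      # historical volatility for the stop
    units, cost, pnl = 0, 0.0, 0.0
    for t in range(lookback, len(prices)):
        signal = prices[t] - prices[t - lookback]      # simple trend signal
        if signal > 0 and units < max_units:           # cascade on successive positive signals
            cost += prices[t]
            units += 1
        elif units > 0 and prices[t] < (cost / units) * (1 - k_stop * vol):
            pnl += units * prices[t] - cost            # stop-loss exit
            units, cost = 0, 0.0
    if units > 0:                                      # close out any open position at the end
        pnl += units * prices[-1] - cost
    return pnl

rng = np.random.default_rng(4)
intraday = 100 * np.exp(np.cumsum(rng.normal(0.0002, 0.001, 375)))   # simulated 1-minute bars
print("toy P&L:", round(run_strategy(intraday), 4))
```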

Backtesting Volatility Assumptions using Overlapping Observations
Clayton, Michael A.
SSRN
In this work we examine the ability of a variety of backtest experiments (overlapping and non-overlapping, with backtest horizons out to a year) and test statistics (Kolmogorov-Smirnov, Anderson-Darling and a likelihood ratio statistic) to identify a model with misspecified volatility. In order to do so we first define a framework for measuring the 'discriminatory power' of a test, which allows us to quantitatively rank tests by their ability to identify a particular model defect/misspecification. We illustrate this using a normal model with misspecified volatility and show that the likelihood ratio test has much more power than standard distributional tests (Kolmogorov-Smirnov and Anderson-Darling) to identify a misspecified model. Using this framework, we then show that test statistics that are adjusted for the correlation structure arising from the overlapping return observations are more powerful than their unadjusted versions when performing overlapping backtesting experiments. However, a similarly adjusted version of the likelihood ratio test statistic is materially more powerful than adjusted versions of the distributional tests. These adjusted test statistics are shown to have comparable discriminatory power to the (non-overlapping) 1-day backtest experiments, whereas overlapping experiments with the unadjusted statistics have a discriminatory power that rapidly deteriorates with increasing overlap.
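As a minimal sketch of the overlapping-observation setup: build h-day overlapping returns, standardize them under a (deliberately misspecified) model volatility, and apply a naive Kolmogorov-Smirnov test. The correlation-adjusted statistics and the likelihood ratio test studied in the paper are not implemented here; the point of the sketch is only the data construction.

```python
# Overlapping h-day returns tested against the model's volatility assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
true_vol, model_vol, h = 0.012, 0.010, 10          # daily vols and backtest horizon (days)
daily = rng.normal(0.0, true_vol, 2000)            # simulated daily log returns
log_price = np.cumsum(daily)

overlapping = log_price[h:] - log_price[:-h]       # h-day overlapping returns
standardized = overlapping / (model_vol * np.sqrt(h))

# Naive KS test against N(0,1); with overlap the observations are correlated,
# so the nominal p-value overstates the evidence, as the paper emphasizes.
ks_stat, p_value = stats.kstest(standardized, "norm")
print(f"KS statistic: {ks_stat:.3f}, nominal p-value: {p_value:.3g}")
```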

Corporate Pension Plan Funding Levels and Pension Assumptions
Michaelides, Alexander,Papakyriakou, Panayiotis,Milidonis, Andreas
SSRN
We use a difference-in-differences approach to examine the causal impact of the funding ratios of U.S. corporate defined benefit (DB) pension plans on the assumption of expected return on pension assets (EROA). To make the causal case, we use the 2008 global financial crisis as an exogenous shock to the funding ratio of DB pension plans, together with the simultaneous implementation of the Pension Protection Act, which emphasized the accountability of underfunded pension plans. We find that DB pension plans making the transition from fully funded to underfunded status over this period significantly revise their EROA assumption upward. The upward revisions in EROA are economically significant and generate obligation-reducing outcomes for corporate plan sponsors: a switch from fully funded to underfunded status generates at least a 40 (and up to an 80) basis point increase in EROA, which, in turn, corresponds to an average annual reduction in pension contributions of $6 (to $11) million.
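As a minimal sketch, the difference-in-differences design described above reduces to an interaction term in a regression of EROA on a treatment indicator and a post-shock indicator. The data frame is simulated, not the authors' pension-plan panel; the planted effect size is set near the reported magnitude purely for illustration.

```python
# Difference-in-differences sketch: EROA ~ underfunded + post + underfunded:post.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 400
df = pd.DataFrame({
    "underfunded": rng.integers(0, 2, n),   # plan switched to underfunded status (treated)
    "post": rng.integers(0, 2, n),          # after the 2008 crisis / Pension Protection Act
})
df["eroa"] = 7.5 + 0.4 * df["underfunded"] * df["post"] + rng.normal(0, 0.3, n)

# The coefficient on underfunded:post is the diff-in-diff estimate of the upward
# EROA revision (the paper reports roughly 40 to 80 basis points).
model = smf.ols("eroa ~ underfunded * post", data=df).fit()
print("DiD estimate:", round(model.params["underfunded:post"], 3))
```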

Costly State Verification and Truthtelling: A Note on the Theory of Debt Contracts
Schosser, Josef,Wilhelm, Jochen
SSRN
When firms want to raise external financing, why do they resort to contracts with fixed repayment, i.e., standard debt contracts? The canonical work of Gale and Hellwig (Rev Econ Stud, 52(4):647–663, 1985) gives the following answer to this question: Assuming that only the entrepreneur can observe the project’s outcome free of charge, the standard debt contract proves to be an incentive-compatible financing design. However, this approach remains inadequate, as neither the lender nor the borrower is given the possibility to act strategically. The paper at hand takes up this aspect. By means of a simple game-theoretic model and focusing on a binary outcome setting, it is shown that every risky standard debt contract is dominated by at least one ownership contract. In this respect, costly state verification cannot act as a raison d’être of contracts with fixed repayment.

Data-driven Neural Architecture Learning For Financial Time-series Forecasting
Dat Thanh Tran,Juho Kanniainen,Moncef Gabbouj,Alexandros Iosifidis
arXiv

Forecasting based on financial time series is a challenging task since most real-world data exhibit nonstationarity and nonlinear dependencies. In addition, different data modalities often embed different nonlinear relationships which are difficult to capture by human-designed models. To tackle the supervised learning task in financial time-series prediction, we propose the application of a recently formulated algorithm that adaptively learns a mapping function, realized by a heterogeneous neural architecture composed of Generalized Operational Perceptrons, given a set of labeled data. With a modified objective function, the proposed algorithm can accommodate the frequently observed imbalance in the data distribution. Experiments on a large-scale Limit Order Book dataset demonstrate that the proposed algorithm outperforms related algorithms, including tensor-based methods which have access to a broader set of input information.
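As a generic illustration of the imbalanced-class issue mentioned above (not the paper's modified objective, nor its Generalized Operational Perceptron architecture): one common remedy is to reweight the cross-entropy loss by inverse class frequency so that rare classes are not ignored.

```python
# Class-frequency-weighted cross-entropy loss (generic sketch, NumPy only).
import numpy as np

def weighted_cross_entropy(probs, labels):
    """probs: (n, k) predicted class probabilities; labels: (n,) integer classes."""
    counts = np.bincount(labels, minlength=probs.shape[1])
    class_weights = counts.sum() / np.maximum(counts, 1)      # rarer classes weigh more
    class_weights /= class_weights.sum()
    picked = probs[np.arange(len(labels)), labels]
    return -np.mean(class_weights[labels] * np.log(picked + 1e-12))

probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6], [0.9, 0.05, 0.05]])
labels = np.array([0, 1, 2, 0])                               # class 0 is over-represented
print("weighted CE loss:", round(weighted_cross_entropy(probs, labels), 4))
```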



DeepTriangle: A Deep Learning Approach to Loss Reserving
Kevin Kuo
arXiv

We propose a novel approach for loss reserving based on deep neural networks. The approach allows for joint modeling of paid losses and claims outstanding, and incorporation of heterogeneous inputs. We validate the models on loss reserving data across lines of business, and show that they improve on the predictive accuracy of existing stochastic methods. The models require minimal feature engineering and expert input, and can be automated to produce forecasts more frequently than manual workflows.



Derivative of a Conic Problem with a Unique Solution
Enzo Busseti,Walaa M. Moursi
arXiv

We view a conic optimization problem that has a unique solution as a map from its data to its solution. If sufficient regularity conditions hold at a solution point, namely that the implicit function theorem applies to the normalized residual function of [Busseti et al., 2018], the problem solution map is differentiable. We obtain the derivative, in the form of an abstract linear operator. This applies to any convex optimization problem in conic form, while a previous result [Amos et al., 2016] studied strictly convex quadratic programs. Such differentiable problems can be used, for example, in machine learning, control, and related areas, as a layer in an end-to-end learning and control procedure, for backpropagation. We accompany this note with a lightweight Python implementation which can handle problems with the cone constraints commonly used in practice.
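In standard notation (ours, not necessarily the note's), the derivative delivered by the implicit function theorem applied to a residual condition takes the following form.

```latex
% If R(x, \theta) = 0 is the (normalized) residual condition characterizing the
% solution x(\theta) of the conic problem with data \theta, and D_x R is
% invertible at the solution, the implicit function theorem gives
\[
  \frac{\partial x}{\partial \theta}
    \;=\; -\,\bigl(D_x R(x,\theta)\bigr)^{-1}\, D_\theta R(x,\theta).
\]
```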



Forecasting the Impact of Connected and Automated Vehicles on Energy Use: A Microeconomic Study of Induced Travel and Energy Rebound
Morteza Taiebat,Samuel Stolper,Ming Xu
arXiv

Connected and automated vehicles (CAVs) are expected to yield significant improvements in safety, energy efficiency, and time utilization. However, their net effect on energy and environmental outcomes is unclear. Higher fuel economy reduces the energy required per mile of travel, but it also reduces the fuel cost of travel, incentivizing more travel and causing an energy "rebound effect." Moreover, CAVs are predicted to vastly reduce the time cost of travel, inducing further increases in travel and energy use. In this paper, we forecast the induced travel and rebound from CAVs using data on existing travel behavior. We develop a microeconomic model of vehicle miles traveled (VMT) choice under income and time constraints; then we use it to estimate elasticities of VMT demand with respect to fuel and time costs, with fuel cost data from the 2017 United States National Household Travel Survey (NHTS) and wage-derived predictions of travel time cost. Our central estimate of the combined price elasticity of VMT demand is -0.4, which differs substantially from previous estimates. We also find evidence that wealthier households have more elastic demand, and that households at all income levels are more sensitive to time costs than to fuel costs. We use our estimated elasticities to simulate VMT and energy use impacts of full, private CAV adoption under a range of possible changes to the fuel and time costs of travel. We forecast a 2-47% increase in travel demand for an average household. Our results indicate that backfire - i.e., a net rise in energy use - is a possibility, especially in higher income groups. This presents a stiff challenge to policy goals for reductions in not only energy use but also traffic congestion and local and global air pollution, as CAV use increases.
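The induced-travel arithmetic can be sketched with a constant-elasticity demand response; only the -0.4 elasticity below comes from the abstract, while the baseline mileage and cost-per-mile figures are illustrative assumptions.

```python
# Constant-elasticity sketch of induced travel (illustrative cost figures).
baseline_vmt = 20000.0          # annual household vehicle miles traveled (assumed)
baseline_cost = 0.50            # combined fuel + time cost per mile, USD (assumed)
new_cost = 0.35                 # cost per mile after CAV adoption (assumed)
elasticity = -0.4               # central VMT demand elasticity from the paper

new_vmt = baseline_vmt * (new_cost / baseline_cost) ** elasticity
print(f"induced travel: {100 * (new_vmt / baseline_vmt - 1):.1f}% more VMT")
```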



How ‘Global’ are Investment Banks? An Analysis of Investment Banking Networks in Asian Equity Capital Markets
Gemici, Kurtulus,Lai, Karen
SSRN
This paper examines the distribution of power within financial networks of investment banks in equity capital markets (ECMs) of three key economies in Asia: Hong Kong, Japan, and Singapore. Using social network analysis, it shows that while bulge-bracket banks occupy core positions in all three locations, their dominance is challenged by emerging Asian investment banks. The ECM networks of investment banks are strongly shaped by the development trajectories and regional contexts of specific international financial centres (IFCs), which reveals the differentiated nature of finance across Asia. Results also highlight the need for further research on networks within financial centres, in addition to inter-city networks, to understand the roles and development of IFCs.

Market Making under a Weakly Consistent Limit Order Book Model
Baron Law,Frederi Viens
arXiv

We develop from the ground up a new market-making model tailor-made for high-frequency trading under a limit order book (LOB), based on the well-known classification of order types in market microstructure. Our flexible framework allows arbitrary volume, jump, and spread distributions as well as the use of market orders. It also honors the consistency of price movements upon arrivals of different order types (e.g. the price never goes down on a buy market order) in addition to respecting the price-time priority of the LOB. In contrast to the regular control of diffusions used in the classical Avellaneda and Stoikov market-making framework, we exploit the techniques of optimal switching and impulse control on marked point processes, which have proven very effective in modeling order-book features. The Hamilton-Jacobi-Bellman quasi-variational inequality (HJBQVI) associated with the control problem can be solved numerically via the finite-difference method. We illustrate our optimal trading strategy with a full numerical analysis, calibrated to the order-book statistics of a popular ETF. Our simulation shows that the profit of market making can be seriously overstated under LOBs with inconsistent price movements.



Momentum in the Indian Equity Markets: Positive Convexity and Positive Alpha
Chakravorty, Gaurav,Srivastava, Sonam,Singhal, Mansi
SSRN
We present effective momentum strategies over the liquid equity futures market in India. We evaluate the persistence of returns at various look-backs, ranging from quarterly and weekly to more granular daily horizons. We look at a universe of around 100 liquid equity futures traded across the Indian derivatives markets to evaluate this anomaly. We evaluate momentum across the two well-known themes: time-series (absolute) momentum and cross-sectional (relative) momentum. We demonstrate that, at the optimal horizon, Indian momentum strategies can be a source of uncorrelated alpha for an international portfolio. We use risk budgeting at a given target risk for portfolio construction. We will show in a separate publication how it outperforms mean-variance optimization.
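As a minimal sketch distinguishing the two momentum definitions discussed above: time-series momentum uses the sign of a contract's own trailing return, while cross-sectional momentum ranks contracts against each other. The universe, look-back and the paper's risk-budgeted portfolio construction are placeholders here.

```python
# Time-series vs cross-sectional momentum signals on simulated futures prices.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
dates = pd.bdate_range("2017-01-02", periods=300)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.0003, 0.015, (300, 5)), axis=0)),
    index=dates, columns=[f"FUT{i}" for i in range(5)])

lookback = 63                                  # roughly one quarter of trading days (assumed)
trailing = prices.pct_change(lookback).iloc[-1]

ts_signal = np.sign(trailing)                  # time-series momentum: long if own return > 0
xs_signal = (trailing.rank() > len(trailing) / 2).astype(int) * 2 - 1   # top half long, bottom half short
print(pd.DataFrame({"trailing": trailing, "TS": ts_signal, "XS": xs_signal}))
```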

On SDEs with Lipschitz coefficients, driven by continuous, model-free price paths
Lesiba Ch. Galane,Rafał M. Łochowski,Farai J. Mhlanga
arXiv

Using assumptions similar to those in Revuz and Yor's book, we prove the existence and uniqueness of solutions of SDEs with Lipschitz coefficients, driven by continuous, model-free price paths. The main tool in our reasoning is a model-free version of the Burkholder-Davis-Gundy inequality for integrals driven by model-free, continuous price paths.



Paying for Market Liquidity: Competition and Incentives
Bellia, Mario,Pelizzon, Loriana,Subrahmanyam, Marti G.,Uno, Jun,Yuferova, Darya
SSRN
Do competition and incentives offered to designated market makers (DMMs) improve market liquidity? Using data from NYSE Euronext Paris, we show that an exogenous increase in competition among DMMs leads to a significant decrease in quoted and effective spreads, mainly through a reduction in adverse selection costs. In contrast, changes in incentives, through small changes in rebates and requirements for DMMs, do not have any tangible effect on market liquidity. Our results are of relevance for designing optimal contracts between exchanges and DMMs and for regulatory market oversight.
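The quoted and effective spreads referenced above can be computed from trade and quote records as sketched below; the records here are simulated placeholders, not Euronext Paris data, and the adverse-selection decomposition used in the paper is not reproduced.

```python
# Quoted and effective spread computation on simulated trade/quote records.
import numpy as np

rng = np.random.default_rng(8)
mid = 100 + np.cumsum(rng.normal(0, 0.01, 1000))        # quote midpoints
half_spread = 0.02
bid, ask = mid - half_spread, mid + half_spread
side = rng.choice([-1, 1], size=1000)                    # -1 = sell, +1 = buy
trade_price = mid + side * half_spread * rng.uniform(0.3, 1.0, 1000)

quoted_spread = np.mean(ask - bid)
effective_spread = np.mean(2 * side * (trade_price - mid))
print(f"quoted: {quoted_spread:.4f}, effective: {effective_spread:.4f}")
```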

Private Contracting, Law and Finance
Acheson, Graeme,Campbell, Gareth,Turner, John D.
SSRN
In the late nineteenth century Britain had almost no mandatory shareholder protections but highly developed financial markets. We argue that private contracting between shareholders and corporations meant that the absence of statutory protections was immaterial. Using circa 500 articles of association from before 1900, we code the protections offered to shareholders in these private contracts. We find that firms voluntarily offered shareholders many of the protections which were subsequently included in statutory corporate law. We also find that companies offering better protection to shareholders had less concentrated ownership.

Reputation and Investor Activism: A Structural Approach
Johnson, Travis L.,Swem, Nathan
SSRN
We measure the impact of reputation for proxy fighting on investor activism by estimating a dynamic model in which activists engage a sequence of target firms. Our estimation produces an evolving reputation measure for each activist and quantifies its impact on campaign frequency and outcomes. We find that high reputation activists initiate 3.5 times as many campaigns and extract 85% more settlements from targets, and that reputation-building incentives explain 20% of campaign initiations and 19% of proxy fights. Our estimates indicate these reputation effects combine to nearly double the value activism adds for target shareholders.

Semimartingale theory of monotone mean-variance portfolio allocation
Aleš Černý
arXiv

We study dynamic optimal portfolio allocation for monotone mean-variance preferences in a general semimartingale model. Armed with new results in this area, we revisit the work of Cui, Li, Wang and Zhu (2012, MAFI) and fully characterize the circumstances under which one can set aside a non-negative cash flow while simultaneously improving the mean-variance efficiency of the left-over wealth. The paper analyzes, for the first time, the monotone hull of the Sharpe ratio and highlights its relevance to the problem at hand.



Time-Variation of Dual-Class Premia
Broussard, John Paul,Vaihekoski, Mika
SSRN
Dual-class shares have been in existence in financial markets for more than one hundred years. One class of shares provides superior voting power, while the other class provides preferential access to economic benefits. Extant literature suggests that superior-voting-class shares should trade at a premium over the economic shares. We revisit the dual-class share phenomenon and document the time-variation characteristics of the dual-class premium. We connect the premium to voting rights, liquidity and disproportional dividend privileges. We also document the relationship between the dual-class premium and legal and institutional structures.

Valuing Equities: Discounting Growth Opportunities, Fearing Inflation
Blanken, Ronald
SSRN
Over the last century the mean of E_retained/P has been roughly constant, while D/P has experienced a long secular decline. If investors cannot simultaneously discount both E_retained and D, examination of the relationships between E_retained/P, D/P and the payoff ratio E_retained/D supports the hypothesis that E_retained/P is the priced valuation ratio. A linear Empirical Model (EM) identifies two data variables, trailing earnings growth (EG) and inflation (cpi), that explain roughly 50% of the variance of E_retained/P (R² ≈ 0.5). The investor's decision to sell or hold shares is governed by the investor's resistance to intertemporal substitution, R = γµ, so the MF-Gordon model becomes E_retained/P = γµ − µ. A dynamic model is obtained using Taylor expansions of the functional relationships µ = µ(EG(t)) and γ = γ(cpi(t)), and is calibrated using the EM. The MF-Gordon model is used to explain the quasi-equality of equities and the long bond (1970-2000), the equity/long-bond premium, and the effect of sentiment on equity prices. The Shiller price-volatility puzzle is explained by the adaptive growth expectation, which loads heavily on the highly volatile trailing earnings growth rate.
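One hedged reading of the valuation relation stated in this abstract (notation adapted; the author's full derivation is not reproduced) writes it in Gordon-type form.

```latex
% With \mu the trailing growth rate and R = \gamma\mu the resistance to
% intertemporal substitution, the relation quoted above can be read as
\[
  \frac{E_{\text{retained}}}{P} \;=\; R - \mu \;=\; \gamma\mu - \mu \;=\; (\gamma - 1)\,\mu .
\]
```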