# Research articles for 2019-02-18

arXiv

Connected and automated vehicles (CAVs) are poised to reshape transportation and mobility by replacing humans as the driver and service provider. While the primary stated motivation for vehicle automation is to improve the safety and convenience of road mobility, this transformation also provides a valuable opportunity to improve vehicle energy efficiency and reduce emissions in the transportation sector. Progress in vehicle efficiency and functionality, however, does not necessarily translate to net positive environmental outcomes. Here we examine the interactions between CAV technology and the environment at four levels of increasing complexity: vehicle, transportation system, urban system, and society. We find that environmental impacts come from CAV-facilitated transformations at all four levels, rather than from CAV technology directly. We anticipate net positive environmental impacts at the vehicle, transportation system, and urban system levels, but expect greater vehicle utilization and shifts in travel patterns at the society level to offset some of these benefits. Focusing on the vehicle-level improvements associated with CAV technology is likely to yield excessively optimistic estimates of environmental benefits. Future research and policy efforts should strive to clarify the extent of, and possible synergistic effects at, the systems level in order to anticipate and address concerns regarding the short- and long-term sustainable adoption of CAV technology.

arXiv

Conditional Value-at-Risk (CVaR) and Value-at-Risk (VaR), also called the superquantile and quantile, are frequently used to characterize the tails of probability distributions and are popular measures of risk. Buffered Probability of Exceedance (bPOE) is a recently introduced characterization of the tail which is the inverse of CVaR, much like the CDF is the inverse of the quantile. These quantities can prove very useful as the basis for a variety of risk-averse parametric engineering approaches. Their use, however, is often made difficult by the lack of well-known closed-form equations for calculating these quantities for commonly used probability distributions. In this paper, we derive formulas for the superquantile and bPOE for a variety of common univariate probability distributions. Besides providing a useful collection within a single reference, we use these formulas to incorporate the superquantile and bPOE into parametric procedures. In particular, we consider two: portfolio optimization and density estimation. First, when portfolio returns are assumed to follow particular distribution families, we show that finding the optimal portfolio via minimization of bPOE has advantages over superquantile minimization. We show that, given a fixed threshold, a single portfolio is the minimal bPOE portfolio for an entire class of distributions simultaneously. Second, we apply our formulas to parametric density estimation and propose the method of superquantiles (MOS), a simple variation of the method of moments (MM) where moments are replaced by superquantiles at different confidence levels. With the freedom to select various combinations of confidence levels, MOS allows the user to focus the fitting procedure on different portions of the distribution, such as the tail when fitting heavy-tailed asymmetric data.
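As a quick illustration of the kind of closed-form superquantile formula the abstract refers to, here is a minimal sketch for the normal distribution, with bPOE recovered numerically as the inverse of CVaR. The function names and the bisection scheme are ours, not the paper's.

```python
from statistics import NormalDist

def normal_superquantile(mu, sigma, alpha):
    """CVaR_alpha of N(mu, sigma): mu + sigma * pdf(z_alpha) / (1 - alpha)."""
    d = NormalDist()
    z = d.inv_cdf(alpha)
    return mu + sigma * d.pdf(z) / (1.0 - alpha)

def normal_bpoe(mu, sigma, x, tol=1e-10):
    """bPOE(x) = 1 - alpha, where alpha solves CVaR_alpha = x.

    Uses bisection on alpha; valid for thresholds x above the mean,
    since CVaR_alpha is increasing in alpha and CVaR_0 equals the mean.
    """
    lo, hi = 0.0, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if normal_superquantile(mu, sigma, mid) < x:
            lo = mid
        else:
            hi = mid
    return 1.0 - (lo + hi) / 2

cvar_95 = normal_superquantile(0.0, 1.0, 0.95)   # about 2.0627
bpoe_back = normal_bpoe(0.0, 1.0, cvar_95)       # about 0.05
```

The round trip CVaR → bPOE recovering `1 - alpha` mirrors the inverse relationship the abstract describes.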

arXiv

The value of an asset in a financial market is given in terms of another asset known as the numeraire. The dynamics of the value is non-stationary and hence, to quantify the relationships between different assets, one requires convenient measures such as the means and covariances of the respective log returns. Here, we develop transformation equations for these means and covariances when one changes the numeraire. The results are verified by a thorough empirical analysis capturing the dynamics of numerous assets in a foreign exchange market. We show that the partial correlations between pairs of assets are invariant under the change of the numeraire. This observable quantifies the relationship between two assets while the influence of the rest is removed. As such, the partial correlations uncover intriguing observations which may not be easily noticed in the ordinary correlation analysis.
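For log returns, a change of numeraire to asset k simply subtracts r_k from every return, so the means and covariances transform linearly. The following sketch verifies those identities on hypothetical toy data (not the paper's FX data): mu'_i = mu_i - mu_k and Cov'(i,j) = Cov(i,j) - Cov(i,k) - Cov(j,k) + Var(k).

```python
def mean(x):
    return sum(x) / len(x)

def cov(x, y):
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

# hypothetical log returns in the old numeraire
r_a = [0.010, -0.020, 0.005, 0.030, -0.010]   # asset A
r_b = [0.000, 0.010, -0.015, 0.020, 0.005]    # asset B
r_k = [0.002, -0.004, 0.001, 0.010, -0.006]   # asset k, the new numeraire

# the same returns expressed in the new numeraire
ra_new = [a - k for a, k in zip(r_a, r_k)]
rb_new = [b - k for b, k in zip(r_b, r_k)]

mean_lhs = mean(ra_new)
mean_rhs = mean(r_a) - mean(r_k)

cov_lhs = cov(ra_new, rb_new)
cov_rhs = cov(r_a, r_b) - cov(r_a, r_k) - cov(r_b, r_k) + cov(r_k, r_k)
```

Because the change of numeraire is linear in the log returns, these identities hold exactly for any sample, which is what makes closed-form transformation equations possible.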

arXiv

Groups of firms often achieve a competitive advantage through the formation of geo-industrial clusters. Although many exemplary clusters, such as Hollywood or Silicon Valley, have been frequently studied, systematic approaches to identify and analyze the hierarchical structure of geo-industrial clusters at the global scale are rare. In this work, we use LinkedIn's employment histories of more than 500 million users over 25 years to construct a labor flow network of over 4 million firms across the world and apply a recursive network community detection algorithm to reveal the hierarchical structure of geo-industrial clusters. We show that the resulting geo-industrial clusters exhibit a stronger association between the influx of educated workers and financial performance, compared to existing aggregation units. Furthermore, our additional analysis of the skill sets of educated workers corroborates the relationship between the labor flow of educated workers and productivity growth. We argue that geo-industrial clusters defined by labor flow provide better insights into the growth and decline of the economy than other common economic units.

arXiv

Technological progress is leading to the proliferation and diversification of trading venues, thus increasing the relevance of the long-standing question of market fragmentation versus consolidation. To address this issue quantitatively, we analyse systems of adaptive traders that choose where to trade based on their previous experience. We demonstrate that, based only on aggregate parameters of trading venues, such as the demand-to-supply ratio, we can assess whether a population of traders will prefer fragmentation or specialization towards a single venue. We investigate what conditions lead to market fragmentation for populations with a long memory and analyse the stability and other properties of both fragmented and consolidated steady states. Finally, we investigate the dynamics of populations with finite memory; when this memory is long, the true long-time steady states are consolidated, but fragmented states are strongly metastable and dominate the behaviour out to long times.
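A toy version of such a model can be sketched as traders who keep an exponentially forgetting score for each venue and choose venues with logit probabilities. This is our illustrative caricature, not the paper's model; the payoff rule, memory factor, and choice intensity below are all assumptions.

```python
import math
import random

random.seed(0)
N_TRADERS, N_VENUES, T = 50, 2, 200
MEMORY = 0.95   # forgetting factor: closer to 1 means longer memory
BETA = 2.0      # choice intensity in the logit rule

scores = [[0.0] * N_VENUES for _ in range(N_TRADERS)]

def choose(s):
    """Logit (softmax) choice over a trader's venue scores."""
    weights = [math.exp(BETA * x) for x in s]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for venue, w in enumerate(weights):
        acc += w
        if r <= acc:
            return venue
    return len(weights) - 1

for _ in range(T):
    counts = [0] * N_VENUES
    choices = [choose(scores[i]) for i in range(N_TRADERS)]
    for v in choices:
        counts[v] += 1
    for i, v in enumerate(choices):
        # assumed payoff proxy: a less crowded venue fills orders more easily
        payoff = 1.0 - counts[v] / N_TRADERS
        scores[i][v] = MEMORY * scores[i][v] + (1 - MEMORY) * payoff

shares = [c / N_TRADERS for c in counts]  # final split of traders across venues
```

Running variants of this loop with different memory lengths and demand/supply parameters is the kind of experiment that distinguishes fragmented from consolidated steady states.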

arXiv

In this paper we consider the worst-case model risk approach described in Glasserman and Xu (2014). Portfolio selection with model risk can be a challenging operational research problem. In particular, it adds a further layer of optimisation on top of the classical problem. We find the analytical solution for optimal mean-variance portfolio selection in the worst-case scenario approach. In the minimum-variance case, we prove that the analytical solution is significantly different from the one found numerically by Glasserman and Xu (2014) and that model risk reduces to an estimation risk. A detailed numerical example is provided.
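For reference, the classical (no model risk) global minimum-variance portfolio has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The sketch below works this out for a hypothetical 2×2 covariance matrix; it illustrates only the classical baseline, not the paper's worst-case solution.

```python
# hypothetical covariance matrix of two assets
Sigma = [[0.04, 0.01],
         [0.01, 0.09]]

# invert the 2x2 matrix by hand
det = Sigma[0][0] * Sigma[1][1] - Sigma[0][1] * Sigma[1][0]
inv = [[ Sigma[1][1] / det, -Sigma[0][1] / det],
       [-Sigma[1][0] / det,  Sigma[0][0] / det]]

# w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
siginv_1 = [sum(row) for row in inv]
norm = sum(siginv_1)
w = [x / norm for x in siginv_1]   # weights sum to 1 by construction
```

Here the lower-variance asset receives the larger weight (w ≈ [8/11, 3/11]), as expected from the closed form.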

arXiv

Dividend yields have been widely used in previous research to relate stock market valuations to cash flow fundamentals. However, this approach relies on the assumption that dividend yields are stationary. Due to the failure to reject the hypothesis of a unit root in the classical dividend-price ratio for the US stock market, Polimenis and Neokosmidis (2016) proposed the use of a modified dividend-price ratio (mdp), defined as the deviation of d and p from their long-run equilibrium, and showed that mdp provides substantially improved forecasting results over the classical dp ratio. Here, we extend that paper by performing multivariate regressions based on the Campbell-Shiller approximation, by utilizing a dynamic econometric procedure to estimate the modified dp, and by testing the modified ratios against reinvested dividend yields. By comparing the performance of mdp and dp in the period after 1965, we are not only able to enhance the robustness of the findings, but also to debunk a possible false explanation that the enhanced mdp performance in predicting future returns comes from a capacity to predict the risk-free return component. Depending on whether one uses the recursive or the population methodology to measure the performance of mdp, the out-of-sample performance gain is between 30% and 50%.
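The recursive methodology mentioned at the end can be sketched as a simple expanding-window predictive regression: fit on data up to t-1, predict t, and score against a historical-mean benchmark via an out-of-sample R². The data below are hypothetical placeholders, not the paper's dividend series.

```python
def fit_ols(x, y):
    """Univariate OLS slope and intercept."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    var = sum((a - mx) ** 2 for a in x)
    covar = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = covar / var
    return slope, my - slope * mx

def recursive_oos_r2(x, y, start):
    """Expanding-window OOS R^2 against the historical-mean benchmark."""
    sse_model, sse_bench = 0.0, 0.0
    for t in range(start, len(y)):
        slope, intercept = fit_ols(x[:t], y[:t])
        pred = slope * x[t] + intercept
        bench = sum(y[:t]) / t
        sse_model += (y[t] - pred) ** 2
        sse_bench += (y[t] - bench) ** 2
    return 1.0 - sse_model / sse_bench

# toy series: returns perfectly linear in the predictor, so OOS R^2 = 1
x = list(range(1, 11))
y = [2 * xi + 1 for xi in x]
r2 = recursive_oos_r2(x, y, start=5)
```

With real return data the model's R² is of course far below 1; the point of the sketch is only the recursive fit-then-predict loop.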

arXiv

Managing unemployment is one of the key issues in social policies. Unemployment insurance schemes are designed to cushion the financial and morale blow of losing a job, but also to encourage the unemployed to seek new jobs more proactively due to the continuous reduction of benefit payments. In the present paper, a simple model of unemployment insurance is proposed with a focus on the optimality of the individual's entry to the scheme. The corresponding optimal stopping problem is solved, and its similarities to and differences from the perpetual American call option are discussed. Beyond a purely financial point of view, we argue that in the actuarial context the optimal decisions should take into account other possible preferences through a suitable utility function. Some examples in this direction are worked out.
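For context on the comparison the abstract draws, the perpetual American call with a dividend yield has a standard closed-form value and exercise threshold. The sketch below implements that textbook formula with hypothetical parameters; it is the benchmark object, not the paper's insurance model.

```python
import math

def perpetual_call(S, K, r, delta, sigma):
    """Perpetual American call with dividend yield delta (requires delta > 0).

    gamma is the positive root of (sigma^2/2) g(g-1) + (r - delta) g - r = 0,
    the threshold is gamma*K/(gamma-1), and below it the value is
    (threshold - K) * (S/threshold)^gamma.
    """
    a = 0.5 * sigma ** 2
    b = r - delta - 0.5 * sigma ** 2
    gamma = (-b + math.sqrt(b * b + 4.0 * a * r)) / (2.0 * a)
    threshold = gamma * K / (gamma - 1.0)   # optimal exercise boundary
    if S >= threshold:
        return S - K                        # exercise immediately
    return (threshold - K) * (S / threshold) ** gamma

v_atm = perpetual_call(1.0, 1.0, 0.05, 0.03, 0.2)   # at-the-money value
v_itm = perpetual_call(2.0, 1.0, 0.05, 0.03, 0.2)   # in the money, below boundary
v_deep = perpetual_call(3.0, 1.0, 0.05, 0.03, 0.2)  # above boundary: intrinsic value
```

The optimal-stopping structure, waiting until a state variable hits a fixed boundary, is what the unemployment-entry problem shares with this option.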

arXiv

In the present paper we provide a two-step principal protection strategy obtained by combining a modification of the Constant Proportion Portfolio Insurance (CPPI) algorithm and a classical Option Based Portfolio Insurance (OBPI) mechanism. This novel approach consists in requiring that the percentage of wealth invested in stocks cannot fall below a fixed level, called the guaranteed minimum equity exposure, and using the resulting adjusted CPPI portfolio as the underlying of an option. The first stage overcomes the so-called cash-in risk typically associated with the standard CPPI technique, while the second guarantees equity market participation. To show the effectiveness of our proposal we provide a detailed computational analysis within the Heston-Vasicek framework, numerically comparing the prices of European plain vanilla options when the underlying is a purely risky asset, a standard CPPI portfolio, or a CPPI portfolio with guaranteed minimum equity exposure.
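The allocation rule the abstract describes can be sketched in one line: standard CPPI invests a multiple of the cushion (portfolio value minus floor) in equity, and the modification caps this from below by a fixed fraction of wealth. Parameter names below are our assumptions, not the paper's notation.

```python
def cppi_gmee_exposure(value, floor, multiplier, gmee):
    """Equity exposure under CPPI with a guaranteed minimum equity exposure.

    Standard CPPI: exposure = multiplier * max(value - floor, 0), which drops
    to zero when the cushion vanishes (the 'cash-in' lock). The gmee floor
    keeps at least gmee * value invested in equity at all times.
    """
    cushion = max(value - floor, 0.0)
    exposure = multiplier * cushion
    return max(exposure, gmee * value)

# cushion exhausted: plain CPPI would hold 0 equity, gmee keeps 20
locked = cppi_gmee_exposure(value=100.0, floor=100.0, multiplier=4.0, gmee=0.2)

# healthy cushion: the usual CPPI rule applies (4 * 10 = 40 > 22)
normal = cppi_gmee_exposure(value=110.0, floor=100.0, multiplier=4.0, gmee=0.2)
```

The `locked` case is exactly the cash-in scenario: the minimum exposure preserves equity participation where standard CPPI would stay in cash forever.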

arXiv

In the frictionless discrete time financial market of Bouchard et al. (2015) we consider a trader who, due to regulatory requirements or internal risk management reasons, is required to hedge a claim $\xi$ in a risk-conservative way relative to a family of probability measures $\mathcal{P}$. We first describe the evolution of $\pi_t(\xi)$ - the superhedging price at time $t$ of the liability $\xi$ at maturity $T$ - via a dynamic programming principle and show that $\pi_t(\xi)$ can be seen as a concave envelope of $\pi_{t+1}(\xi)$ evaluated at today's prices. Then we consider an optimal investment problem for a trader who is rolling over her robust superhedge and phrase this as a robust maximisation problem, where the expected utility of inter-temporal consumption is optimised subject to a robust superhedging constraint. This utility maximisation is carried out under a new family of measures $\mathcal{P}^u$, which no longer have to capture regulatory or institutional risk views but rather represent the trader's subjective views on market dynamics. Under suitable assumptions on the trader's utility functions, we show that optimal investment and consumption strategies exist and further specify when, and in what sense, these may be unique.
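The concave-envelope characterization can be seen in a one-step toy example with a finite scenario set: the superhedging price is the smallest initial capital x such that, for some hedge ratio h, x + h(S' - S) dominates the payoff in every scenario. The grid search below is our illustrative construction, not the paper's method.

```python
def superhedge_price(s_now, scenarios, payoff, h_grid):
    """One-step superhedging price of a claim on a single risky asset.

    For each candidate hedge ratio h, the capital needed is the worst-case
    shortfall max_s [payoff(s) - h * (s - s_now)]; minimise over h.
    """
    best = float("inf")
    for h in h_grid:
        need = max(payoff(s) - h * (s - s_now) for s in scenarios)
        best = min(best, need)
    return best

s_now = 100.0
scenarios = [80.0, 100.0, 120.0]          # possible prices tomorrow
call = lambda s: max(s - 100.0, 0.0)      # at-the-money call payoff
h_grid = [i / 100 for i in range(0, 101)]  # hedge ratios 0.00 .. 1.00

price = superhedge_price(s_now, scenarios, call, h_grid)
```

The optimum is h = 0.5 with price 10, which is exactly the concave envelope of the call payoff over the scenario range [80, 120] evaluated at today's price of 100: the chord from (80, 0) to (120, 20) passes through (100, 10).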

arXiv

We consider a discrete time financial market with proportional transaction costs under model uncertainty, and study a num\'eraire-based semi-static utility maximization problem with an exponential utility preference. The randomization techniques recently developed in \cite{BDT17} allow us to transform the original problem into a frictionless counterpart on an enlarged space. Using a dynamic programming argument different from that of \cite{bartl2016exponential}, we prove the existence of the optimal strategy and the convex duality theorem in our context with transaction costs. In the frictionless framework, this alternative dynamic programming argument also allows us to generalize the main results of \cite{bartl2016exponential} to a weaker market condition. Moreover, as an application of the duality representation, some basic features of utility indifference prices are investigated in our robust setting with transaction costs.