Research articles for 2020-12-14
arXiv
We study an optimal dividend problem for an insurer who simultaneously controls investment weights in a financial market, liability ratio in the insurance business, and dividend payout rate. The insurer seeks an optimal strategy to maximize her expected utility of dividend payments over an infinite horizon. By applying a perturbation approach, we obtain the optimal strategy and the value function in closed form for log and power utility. We conduct an economic analysis to investigate the impact of various model parameters and risk aversion on the insurer's optimal strategy.
arXiv
We provide an economically sound micro-foundation for linear price impact models by deriving them as the equilibrium of a suitable agent-based system. Our setup generalizes the well-known Kyle model by dropping the assumption of a terminal time at which fundamental information is revealed, so as to describe a stationary market, while retaining agents' rationality and asymmetric information. We investigate the stationary equilibrium for arbitrary Gaussian noise trades and fundamental information, and show that the setup is compatible with universal price diffusion at small times and non-universal mean reversion at the time scales at which fluctuations in fundamentals decay. Our model provides a testable relation between the volatility of prices, the magnitude of fluctuations in fundamentals, and the level of volume traded in the market.
arXiv
This article provides a simple explanation of the asymptotic concavity of the price impact of a meta-order via the microstructural properties of the market. This explanation is made more precise by a model in which the local relationship between the order flow and the fundamental price (i.e. the local price impact) is linear, with a constant slope, which makes the model dynamically consistent. Nevertheless, the expected impact on midprice from a large sequence of co-directional trades is nonlinear and asymptotically concave. The main practical conclusion of the proposed explanation is that, throughout a meta-order, the volumes at the best bid and ask prices change (on average) in favor of the executor. This conclusion, in turn, relies on two more concrete predictions, one of which can be tested, at least for large-tick stocks, using publicly available market data.
arXiv
The aim of this paper is to investigate the use of closed-form approximations for pricing European mortgage options. Under the assumptions of logistic duration and normally distributed mortgage rates, the underlying price at option expiry is approximated by a shifted or regular lognormal distribution via moment matching. Once the price function is approximated by lognormal distributions, the option price can be computed directly by integrating the distribution function over the payoff at expiry using the Black-Scholes-Merton closed formula. We will see that lower curvature levels correspond to positively skewed price distributions, in which case the lognormal approximation leads to a closed parametric formula in terms of all model parameters. The proposed methodologies are tested against a Monte Carlo approach under different market and contract parameters, and the tests confirm that the closed-form approximation has very good accuracy.
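The moment-matching step described above can be illustrated with a short sketch. This is an illustrative reconstruction, not the paper's code: given the first two moments of the underlying price at expiry, fit a (regular, unshifted) lognormal and evaluate the call payoff with the Black-Scholes-Merton closed formula. The function names and toy inputs are my own.

```python
import math
from statistics import NormalDist

def lognormal_from_moments(mean, var):
    """Match a lognormal's first two moments to (mean, var)."""
    sigma2 = math.log(1.0 + var / mean**2)
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)

def call_on_lognormal(mean, var, strike, discount=1.0):
    """Closed form for discount * E[(S - K)^+] with S lognormal,
    parameterised by its first two moments (Black-Scholes-Merton style)."""
    mu, sigma = lognormal_from_moments(mean, var)
    N = NormalDist().cdf
    d1 = (mu + sigma**2 - math.log(strike)) / sigma
    d2 = d1 - sigma
    return discount * (mean * N(d1) - strike * N(d2))

# near-zero variance: the price collapses to the intrinsic value
almost_intrinsic = call_on_lognormal(100.0, 0.01, 90.0)
# at-the-money with 20% standard deviation: pure time value
atm_price = call_on_lognormal(100.0, 400.0, 100.0)
```

A shifted lognormal variant would apply the same formulas to `S - shift` with the strike shifted accordingly.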
arXiv
This paper estimates the unemployment gap (the actual unemployment rate minus the efficient unemployment rate) using the sufficient-statistic approach from public economics. While lowering unemployment puts more people into work, it forces firms to post more vacancies and devote more resources to recruiting. This unemployment-vacancy tradeoff, governed by the Beveridge curve, determines the efficient unemployment rate. Accordingly, in any model with a Beveridge curve, the unemployment gap depends on three sufficient statistics: the elasticity of the Beveridge curve, the social cost of unemployment, and the cost of recruiting. Applying this novel formula to the United States, we find that the efficient unemployment rate has varied between 3.0% and 5.4% since 1951, and has been stable between 3.8% and 4.6% since 1990. As a result, the unemployment gap is countercyclical, reaching 6 percentage points in deep slumps. Thus the US labor market is inefficient, and especially inefficiently slack in slumps. The unemployment gap is in turn a crucial statistic for designing labor-market and macroeconomic policies.
arXiv
The success of a cross-sectional systematic strategy depends critically on accurately ranking assets prior to portfolio construction. Contemporary techniques perform this ranking step either with simple heuristics or by sorting outputs from standard regression or classification models, which have been demonstrated to be sub-optimal for ranking in other domains (e.g. information retrieval). To address this deficiency, we propose a framework to enhance cross-sectional portfolios by incorporating learning-to-rank algorithms, which improve ranking accuracy by learning pairwise and listwise structures across instruments. Using cross-sectional momentum as a demonstrative case study, we show that the use of modern machine learning ranking algorithms can substantially improve the trading performance of cross-sectional strategies, approximately tripling Sharpe ratios compared to traditional approaches.
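The pairwise idea behind learning-to-rank can be sketched in a few lines. This is a RankNet-style illustration of the general family of algorithms the abstract refers to, not the paper's implementation; the linear scorer, the synthetic features, and all names are assumptions.

```python
import numpy as np

def pairwise_rank_grad(w, X, y):
    """Gradient of the pairwise logistic (RankNet-style) loss for a
    linear scorer s_i = w . x_i; item i should outrank j when y[i] > y[j]."""
    s = X @ w
    grad = np.zeros_like(w)
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:
                # probability the model currently misorders the pair
                p = 1.0 / (1.0 + np.exp(np.clip(s[i] - s[j], -30.0, 30.0)))
                grad -= p * (X[i] - X[j])   # push s_i above s_j
    return grad

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # features of 50 hypothetical instruments
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                        # latent target, e.g. future returns

w = np.zeros(3)
for _ in range(200):
    w -= 0.01 * pairwise_rank_grad(w, X, y)

ranks = lambda v: np.argsort(np.argsort(v))
spearman = np.corrcoef(ranks(X @ w), ranks(y))[0, 1]
```

The learned scores are used only for their induced ordering (here summarized by the Spearman correlation with the target ranking), which is exactly the quantity a cross-sectional portfolio construction step consumes.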
arXiv
Three radical ideas are presented. First, that the rationale for cancellation of principal is not justified in modern banking. Second, that non-cancellation of loan principal upon payment cures an old problem of maintaining positive equity in the private sector. And third, that crediting this money to local government, or to at-risk ventures that create new utility value, creates an additional virtuous monetary circuit that ties the finances of government directly to commercial activity.
Taking these steps can cure a problem I have identified with modern monetary theory, which is that breaking the monetary circuit of taxation in the minds of politicians will free them from centuries of restraint, optimizing their opportunities for implementing tyranny. It maintains and strengthens the current circuit, creating a new, more direct monetary circuit that in some respects combats inequality.
arXiv
In this paper we characterize the niveloidal preferences that satisfy the Weak Order, Monotonicity, Archimedean, and Weak C-Independence Axioms from the point of view of an intra-personal, leader-follower game. We also show that the leader's strategy space can serve as an ambiguity aversion index.
arXiv
Recent developments in deep learning techniques have motivated intensive research in machine learning-aided stock trading strategies. However, since the financial market has a highly non-stationary nature hindering the application of typical data-hungry machine learning methods, leveraging financial inductive biases is important to ensure better sample efficiency and robustness. In this study, we propose a novel method of constructing a portfolio based on predicting the distribution of a financial quantity called residual factors, which is known to be generally useful for hedging the risk exposure to common market factors. The key technical ingredients are twofold. First, we introduce a computationally efficient extraction method for the residual information, which can be easily combined with various prediction algorithms. Second, we propose a novel neural network architecture that allows us to incorporate widely acknowledged financial inductive biases such as amplitude invariance and time-scale invariance. We demonstrate the efficacy of our method on U.S. and Japanese stock market data. Through ablation experiments, we also verify that each individual technique contributes to improving the performance of trading strategies. We anticipate our techniques may have wide applications in various financial problems.
arXiv
We perform a time-dependent extended Tsallis statistics analysis of stock market indices. Specifically, we evaluate the q-triplet for particular time periods of the market in order to demonstrate the temporal dependence of the extended characteristics of the underlying market dynamics. We apply the analysis to four major global markets (S&P 500, NIKKEI, DAX, LSE). For comparison, we compute time-dependent Generalized Hurst Exponents (GHE) H_q, thus estimating the temporal evolution of the multiscaling characteristics of the index dynamics. We focus on periods before and after critical market events such as stock market bubbles (the 2000 dot-com bubble, the Japanese 1990 bubble, the 2008 US real estate crisis) and find that the q-triplet values differ significantly among these periods, indicating that in the rising period before a bubble breaks, the underlying extended statistics of the market dynamics strongly deviate from purely stochastic behavior, whereas, after the breakdown, they gradually converge to the Gaussian-like behavior that is characteristic of an efficient market. Differences between endogenous and exogenous stock market crises are also captured by the temporal changes in the Tsallis q-triplet. We also introduce two new time-dependent empirical metrics (Q-metrics) that are functions of the Tsallis q-triplet. We apply them to the above stock market index price time series and discuss the significance of their temporal dependence on market dynamics and the possibility of using them, together with the temporal changes in the q-triplet, as signaling tools for future market events such as the development of a market bubble.
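The GHE estimation mentioned above can be sketched as a log-log regression of the q-th moment of increments against the time lag. This is an illustrative implementation of the standard estimator; the details used in the paper (lag ranges, windowing) may differ.

```python
import numpy as np

def ghe(x, q=2.0, taus=range(1, 20)):
    """Estimate the generalized Hurst exponent H_q from the scaling law
    E|x(t+tau) - x(t)|^q ~ tau^(q * H_q), via log-log regression."""
    taus = np.asarray(list(taus))
    moments = np.array([np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus])
    slope = np.polyfit(np.log(taus), np.log(moments), 1)[0]
    return slope / q

# sanity check on Brownian motion, where H_q = 0.5 for every q
rng = np.random.default_rng(1)
bm = np.cumsum(rng.normal(size=100_000))
h2 = ghe(bm, q=2.0)
```

Multiscaling (the quantity tracked in the paper) shows up as a nontrivial dependence of `ghe(x, q)` on `q`; for the monofractal Brownian benchmark above it is flat at 0.5.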
arXiv
We investigate the optimal portfolio deleveraging (OPD) problem with permanent and temporary price impacts, where the objective is to maximize equity while meeting a prescribed debt/equity requirement. We take the realistic situation of cross impact among different assets into consideration. The resulting problem is, however, a non-convex quadratic program with a quadratic constraint and a box constraint, which is known to be NP-hard. In this paper, we first develop a successive convex optimization (SCO) approach for solving the OPD problem and show that the SCO algorithm converges to a KKT point of its transformed problem. Second, we propose an effective global algorithm for the OPD problem, which integrates the SCO method, simple convex relaxation and a branch-and-bound framework, to identify a global optimal solution to the OPD problem within a pre-specified $\epsilon$-tolerance. We establish the global convergence of our algorithm and estimate its complexity. We also conduct numerical experiments to demonstrate the effectiveness of our proposed algorithms with both real data and randomly generated medium- and large-scale OPD problem instances.
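The successive-convex-optimization idea can be illustrated on a one-dimensional toy problem (not the paper's OPD formulation, which is a nonconvex QP): write the objective as a difference of convex functions, linearize the concave part at the current iterate, and solve the resulting convex subproblem, here in closed form.

```python
import math

def sca_minimize(x0, steps=50):
    """Successive convex approximation for f(x) = x^4 - 3x^2, split as a
    difference of convex functions: keep x^4, linearize -3x^2 at x_k.
    The convex subproblem min_x x^4 - 6*x_k*x has the closed-form
    solution x = (1.5 * x_k)^(1/3); iterates converge to a stationary
    point of f (here the global minimizer sqrt(1.5))."""
    x = x0
    for _ in range(steps):
        x = math.copysign(abs(1.5 * x) ** (1.0 / 3.0), x)
    return x

x_star = sca_minimize(1.0)
```

Each subproblem upper-bounds the true objective and touches it at the current iterate, so the objective is monotonically non-increasing along the iterates, which is the mechanism behind the KKT-point convergence claimed for the SCO algorithm.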
arXiv
We study the disproportionate impact of the lockdown as a result of the COVID-19 outbreak on female and male academics' research productivity in social science. The lockdown has caused substantial disruptions to academic activities, requiring people to work from home. How this disruption affects productivity and the related gender equity is an important operations and societal question. We collect data from the largest open-access preprint repository for social science on 41,858 research preprints in 18 disciplines produced by 76,832 authors across 25 countries over a span of two years. We use a difference-in-differences approach leveraging the exogenous pandemic shock. Our results indicate that, in the 10 weeks after the lockdown in the United States, although the total research productivity increased by 35%, female academics' productivity dropped by 13.9% relative to that of male academics. We also show that several disciplines drive such gender inequality. Finally, we find that this intensified productivity gap is more pronounced for academics in top-ranked universities, and the effect exists in six other countries. Our work points out the fairness issue in productivity caused by the lockdown, a finding that universities will find helpful when evaluating faculty productivity. It also helps organizations realize the potential unintended consequences that can arise from telecommuting.
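The difference-in-differences logic used here reduces, in its canonical 2x2 form, to a double difference of group means. The sketch below uses made-up numbers and is purely schematic; the paper's specification additionally involves disciplines, countries, and controls.

```python
import numpy as np

def did_estimate(y, treated, post):
    """Canonical 2x2 difference-in-differences estimate:
    (treated post - treated pre) - (control post - control pre)."""
    y, treated, post = map(np.asarray, (y, treated, post))
    cell = lambda t, p: y[(treated == t) & (post == p)].mean()
    return (cell(1, 1) - cell(1, 0)) - (cell(0, 1) - cell(0, 0))

# toy data: the control group trends up by 1; the treated group
# starts at 8 and ends at 7, implying a treatment effect of -2
y       = [10, 11, 8, 7]
treated = [0,  0,  1, 1]
post    = [0,  1,  0, 1]
effect = did_estimate(y, treated, post)   # (7 - 8) - (11 - 10) = -2
```

The double difference nets out both the group-level gap and the common time trend, which is why the exogenous timing of the lockdown is the key identifying assumption.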
arXiv
Different regional reactions to the wars of 1894 and 1900 can significantly impact Chinese imports in 2001. As international relations grow tense and China rises, international conflicts could decrease trade. We analyze the impact of historic political conflict by measuring the regional change in the number of people passing the imperial exam caused by the wars. The wars led to an unsuccessful reform and shocked the elites; elites in different regions held different ideas about modernization, so the change in the number of exam passes differed widely across regions after the wars. Where the regional number of exam passes increased by 1% after the wars, imports from the then empires decrease by 2.050% in 2001, which shows the impact of cultural barriers. Manufactured goods can be affected because brands can be identified easily. Risk aversion toward expensive products in conservative regions can increase imports of equipment. Value chains require deep trust, which decreases imports by foreign companies and assembly trade.
arXiv
We examine the long-term behavior of a Bayesian agent who has a misspecified belief about the time lag between actions and feedback, and learns about the payoff consequences of his actions over time. Misspecified beliefs about time lags result in attribution errors, which have no long-term effect when the agent's action converges, but can lead to arbitrarily large long-term inefficiencies when his action cycles. Our proof uses concentration inequalities to bound the frequency of action switches, which are useful to study learning problems with history dependence. We apply our methods to study a policy choice game between a policy-maker who has a correctly specified belief about the time lag and the public who has a misspecified belief.
arXiv
In this paper, we study general monetary risk measures (without any convexity or weak convexity assumptions). A monetary (respectively, positively homogeneous) risk measure can be characterized as the lower envelope of a family of convex (respectively, coherent) risk measures. The proof does not depend on, but easily leads to, the classical representation theorems for convex and coherent risk measures. When law-invariance and SSD (second-order stochastic dominance) consistency are involved, it is not the convexity (respectively, coherence) but the comonotonic convexity (respectively, comonotonic coherence) of risk measures that can be used for such lower-envelope characterizations in a unified form. The representation of a law-invariant risk measure in terms of VaR is also provided.
arXiv
We consider a variant of Cournot competition, where multiple firms allocate the same amount of resource across multiple markets. We prove that the game has a unique pure-strategy Nash equilibrium (NE), which is symmetric and is characterized by the maximal point of a "potential function". The NE is globally asymptotically stable under the gradient adjustment process, and is not socially optimal in general. An application is in transportation, where drivers allocate time over a street network.
arXiv
This study investigates the relationship of the equity home bias with 1) country-level behavioral unfamiliarity and 2) the home-foreign return correlation. We set the hypotheses that 1) unfamiliarity about foreign equities plays a role in portfolio setup and 2) the correlation of returns on home and foreign equities affects the equity home bias when there is a lack of information about foreign equities. For the empirical analysis, the proportion of respondents to the question "How much do you trust? - People you meet for the first time" is used as a proxy measure for country-specific unfamiliarity. Based on the eleven developed countries for which such data are available, we implement a feasible generalized least squares (FGLS) method. Empirical results suggest that country-specific unfamiliarity has a significant and positive correlation with the equity home bias. When it comes to the correlation of returns between home and foreign equities, we identify a negative correlation with the equity home bias, which is against our hypothesis. Moreover, an excess return on home equities relative to foreign ones is found to have a positive correlation with the equity home bias, which is consistent with the comparative statics only if foreign investors have sufficiently higher risk aversion than domestic investors. We check the robustness of our empirical analysis by fitting alternative specifications and using a log-transformed measure of the equity home bias, obtaining results consistent with those based on the original measure.
arXiv
Industries can enter one country first and then enter its neighbors' markets. Firms in an industry can expand their trade networks through the export behavior of other firms in the industry. If a firm depends on a few foreign markets, the political risks of those markets will hurt the firm. Frequent trade disputes reflect the importance of the choice of export destinations. Although the market diversification strategy was proposed before, most firms still focus on a few markets, and this paper shows why. We assume that the entry cost of firms is not entirely sunk, and show, theoretically and empirically, two ways in which product heterogeneity affects the extensive margin of exports. First, an increase in product heterogeneity raises market power and profit, so more firms are able to pay the entry cost. As more firms enter a market, information about that market becomes known to other firms in the industry. Firms can adjust their behavior according to other firms, so this information changes the entry cost, which is therefore not completely sunk. The information makes firms more likely to enter the market, and to enter the markets surrounding the existing markets of other firms in the industry. When firms choose new markets, they tend to enter markets with few competitors first. Second, product heterogeneity directly affects firms' network expansion: a reduction in product heterogeneity increases the value of peer information, making firms more likely to enter a market, so firms in the industry concentrate on a few markets.
arXiv
In this paper we study the so-called minimum income condition order, which is used in some day-ahead electricity power exchanges to represent the production-related costs of generating units. This order belongs to the family of complex orders, which imply non-convexities in the market clearing problem. We demonstrate via simple numerical examples that if several such bids are present in the market, their interplay may open the possibility of strategic bidding. More precisely, we show that by manipulating bid parameters, a strategic player may increase its own profit and potentially induce the deactivation of another minimum income condition order that would be accepted under truthful bidding. Furthermore, we show that if we modify the objective function used in the market clearing according to principles suggested in the literature, it is possible to prevent such strategic bidding, but the modification raises other issues.
arXiv
Inspired by a series of remarkable papers in recent years that use Deep Neural Nets to substantially speed up the calibration of pricing models, we investigate the use of Chebyshev Tensors instead of Deep Neural Nets. Given that Chebyshev Tensors can be, under certain circumstances, more efficient than Deep Neural Nets at exploring the input space of the function to be approximated, due to their exponential convergence, the problem of calibration of pricing models seems, a priori, a good case where Chebyshev Tensors can be used.
In this piece of research, we built Chebyshev Tensors, either directly or with the help of the Tensor Extension Algorithms, to tackle the computational bottleneck associated with the calibration of the rough Bergomi volatility model. Results are encouraging: the accuracy of model calibration via Chebyshev Tensors is similar to that obtained with Deep Neural Nets, but with building efforts between 5 and 100 times lower in the experiments run. Our tests indicate that when using Chebyshev Tensors, the calibration of the rough Bergomi volatility model is around 40,000 times more efficient than calibration via brute force (using the pricing function).
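The exponential convergence that makes Chebyshev Tensors attractive is easy to see in one dimension. This is a generic illustration with `numpy`; the paper builds multi-dimensional tensors for the rough Bergomi calibration, which this sketch does not attempt.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_fit(f, degree, lo=-1.0, hi=1.0):
    """Interpolate f at Chebyshev nodes mapped to [lo, hi]; for smooth f
    the maximum error decays exponentially in the degree."""
    k = np.arange(degree + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))  # Chebyshev points
    x = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)
    coeffs = C.chebfit(2 * (x - lo) / (hi - lo) - 1, f(x), degree)
    return lambda z: C.chebval(2 * (z - lo) / (hi - lo) - 1, coeffs)

f = np.exp                        # a smooth stand-in for a pricing function
grid = np.linspace(-1.0, 1.0, 1001)
err = lambda d: np.max(np.abs(cheb_fit(f, d)(grid) - f(grid)))
e5, e10 = err(5), err(10)         # a few extra nodes buy orders of magnitude
```

For analytic functions the error shrinks geometrically with the number of nodes, which is why relatively few pricing-function evaluations can suffice to build an accurate proxy; the curse of dimensionality in many-parameter models is what the Tensor Extension Algorithms are meant to mitigate.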
arXiv
Explaining why humans cooperate in anonymous contexts is a major goal of human behavioral ecology, cultural evolution, and related fields. What predicts cooperation in anonymous contexts is inconsistent across populations, levels of analysis, and games. For instance, market integration is a key predictor across ethnolinguistic groups but has inconsistent predictive power at the individual level. We adapt an idea from 19th-century sociology: people in societies with greater overlap in ties across domains among community members (Durkheim's "mechanical" solidarity) will cooperate more with their network partners and less in anonymous contexts than people in societies with less overlap ("organic" solidarity). This hypothesis, which can be tested at the individual and community level, assumes that these two types of societies differ in the importance of keeping existing relationships as opposed to recruiting new partners. Using multiplex networks, we test this idea by comparing cooperative tendencies in both anonymous experimental games and real-life communal labor tasks across 9 Makushi villages in Guyana that vary in the degree of within-village overlap. Average overlap in a village predicts both real-world cooperative and anonymous interactions in the predicted direction; individual overlap also has effects in the expected direction. These results reveal a consistent patterning of cooperative tendencies at both individual and local levels and contribute to the debate over the emergence of norms for cooperation among humans. Multiplex overlap can help us understand inconsistencies in previous studies of cooperation in anonymous contexts and is an unexplored dimension with explanatory power at multiple levels of analysis.
arXiv
This article is a response to a question many economists ask: how can I improve my first draft? The first section addresses a common approach to doing this: treating problems visible on the surface. This paper presents six such symptoms along with treatments for them. The second section addresses another approach, one that often turns out to be more effective: looking deeper for the underlying malady that causes several symptoms to show up on the surface and treating it. This paper presents five common maladies that matter for eventual outcomes, such as publishing and hiring.