Research articles for 2021-07-11
arXiv
We study the impacts of incomplete information on centralized one-to-one matching markets. We focus on the commonly used Deferred Acceptance mechanism (Gale and Shapley, 1962). We show that many complete-information results are fragile to a small infusion of uncertainty about others' preferences.
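For context, the proposer-proposing Deferred Acceptance mechanism referenced above can be sketched in a few lines. This is a minimal complete-information illustration; the function names and the toy market below are invented for the example and are not taken from the paper.

```python
# Minimal sketch of (proposer-proposing) Deferred Acceptance under complete
# information; names and data layout are illustrative, not from the paper.
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """proposer_prefs[p]: receivers in p's order of preference;
    receiver_prefs[r]: proposers in r's order of preference."""
    # rank[r][p] = position of proposer p in receiver r's list (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}   # next receiver p will propose to
    matched_to = {}                                # receiver -> tentatively held proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        if next_choice[p] >= len(proposer_prefs[p]):
            continue                               # p has exhausted the list, stays single
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        current = matched_to.get(r)
        if current is None:
            matched_to[r] = p                      # r tentatively accepts p
        elif rank[r][p] < rank[r][current]:
            matched_to[r] = p                      # r trades up to p
            free.append(current)                   # former partner becomes free again
        else:
            free.append(p)                         # r rejects p
    return {p: r for r, p in matched_to.items()}

# Example: a 2x2 market; the result is the proposer-optimal stable matching
proposers = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
receivers = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(deferred_acceptance(proposers, receivers))   # m1 matched to w2, m2 matched to w1
```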
arXiv
Financial markets, and more generally macro-economic models, involve a large number of individuals interacting through variables such as prices that result from the aggregate behavior of all the agents. Mean field games have been introduced to study Nash equilibria for such problems in the limit where the number of players is infinite. The theory has been extensively developed over the past decade, using both analytical and probabilistic tools, and a wide range of applications has been discovered, from economics to crowd motion. More recently, the interaction with machine learning has attracted growing interest. This aspect is particularly relevant for solving very large games with complex structures, in high dimension, or with common sources of randomness. In this chapter, we review the literature on the interplay between mean field games and deep learning, with a focus on three families of methods. A special emphasis is given to financial applications.
arXiv
I analyze how a careerist delegate makes reform decisions and implements them under alternative information environments. Regardless of his true policy preference, the delegate seeks retention and tries to signal to a principal that he shares an aligned policy predisposition. Given this pandering incentive, the principal best motivates the delegate's implementation if she can commit to a retention rule that is pivotal on reform outcomes. I characterize an "informativeness condition" under which this retention rule is endogenous, provided that the principal uses an opaque information policy: she observes the delegate's policy choice and outcomes, but not his effort. Under other information policies, the principal has to reward congruent policy choices rather than good policy outcomes; her policy interest suffers because she fails to sufficiently motivate reform implementation.
arXiv
How will the novel coronavirus evolve? I study a simple SEPAIRD model in which mutations may stochastically change the properties of the virus and its associated disease, and antigenic drift allows new variants to partially evade immunity. I show analytically that variants with higher infectiousness, longer disease duration, and a shorter latency period prove to be fitter. "Smart" containment policies targeting symptomatic individuals may redirect the evolution of the virus, as they give an edge to variants with a longer incubation period and a higher share of asymptomatic infections. Reduced mortality, on the other hand, does not per se prove to be an evolutionary advantage. I then implement this model as an agent-based simulation in order to explore its aggregate dynamics. Monte Carlo simulations show that a) containment policy design affects both the speed and the direction of viral evolution, b) the virus may circulate in the population indefinitely if containment efforts are too relaxed and the virus's propensity to escape immunity is high enough, and crucially c) it may not be possible to distinguish between a slowly and a rapidly evolving virus by looking only at short-term epidemiological outcomes. Thus, what looks like a successful mitigation strategy in the short run may prove to have devastating long-run effects. These results suggest that optimal containment policy must take into account the virus's propensity to mutate and escape immunity, strengthening the case for genetic and antigenic surveillance even in the early stages of an epidemic.
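To make the compartment structure concrete, here is a minimal deterministic sketch of a SEPAIRD-style model without mutation, immune escape, or containment policy. The compartment interpretation (susceptible, exposed, presymptomatic, asymptomatic, symptomatic infected, recovered, dead) and all parameter values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Minimal deterministic SEPAIRD-style sketch (no mutation, no immune escape,
# no policy). Compartment meanings and all parameter values are illustrative
# assumptions, not the paper's calibration.
def simulate_sepaird(days=365, dt=0.1,
                     beta_p=0.3, beta_a=0.2, beta_i=0.4,  # transmission rates by stage
                     sigma=1/3,     # E -> P: end of latency
                     delta=1/2,     # P -> A or I: end of presymptomatic phase
                     alpha=0.4,     # share of infections remaining asymptomatic
                     gamma=1/7,     # recovery rate
                     mu=0.005):     # death rate of symptomatic cases
    S, E, P, A, I, R, D = 0.999, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0
    history = []
    for _ in range(int(days / dt)):
        force = beta_p * P + beta_a * A + beta_i * I      # force of infection
        dS = -force * S
        dE = force * S - sigma * E
        dP = sigma * E - delta * P
        dA = alpha * delta * P - gamma * A
        dI = (1 - alpha) * delta * P - (gamma + mu) * I
        dR = gamma * (A + I)
        dD = mu * I
        S, E, P, A, I, R, D = (S + dt * dS, E + dt * dE, P + dt * dP,
                               A + dt * dA, I + dt * dI, R + dt * dR, D + dt * dD)
        history.append((S, E, P, A, I, R, D))
    return np.array(history)

traj = simulate_sepaird()
print("final susceptible share:", round(traj[-1, 0], 3))
print("cumulative deaths share:", round(traj[-1, -1], 4))
```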
arXiv
We show that the barrier function in Root's solution to the Skorokhod embedding problem is continuous and finite at every point where the target measure has no atom and its absolutely continuous part is locally bounded away from zero.
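A schematic restatement in symbols, under the assumption of standard notation for Root's barrier; the symbols below are chosen for illustration and are not copied from the paper.

```latex
% Notation assumed: R \subset [0,\infty] \times \mathbb{R} is Root's barrier for
% the target law \mu, with barrier function b(x) = \inf\{ t \ge 0 : (t, x) \in R \}.
\[
  \mu(\{x_0\}) = 0
  \quad\text{and}\quad
  \operatorname*{ess\,inf}_{x \in (x_0 - \varepsilon,\, x_0 + \varepsilon)}
  \frac{d\mu^{\mathrm{ac}}}{dx}(x) > 0
  \ \text{for some } \varepsilon > 0
  \;\Longrightarrow\;
  b \text{ is finite and continuous at } x_0 .
\]
```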
arXiv
Green hydrogen can help to decarbonize parts of the transportation sector, but its power sector interactions are not well understood. It may contribute to integrating variable renewable energy sources if production is sufficiently flexible in time. Using an open-source co-optimization model of the power sector and four options for supplying hydrogen at German filling stations, we find a trade-off between energy efficiency and temporal flexibility: for lower shares of renewables and hydrogen, more energy-efficient and less flexible small-scale on-site electrolysis is optimal. For higher shares of renewables and/or hydrogen, more flexible but less energy-efficient large-scale hydrogen supply chains gain importance as they allow disentangling hydrogen production from demand via storage. Liquid hydrogen emerges as particularly beneficial, followed by liquid organic hydrogen carriers and gaseous hydrogen. Large-scale hydrogen supply chains can deliver substantial power sector benefits, mainly through reduced renewable surplus generation. Energy modelers and system planners should consider the distinct flexibility characteristics of hydrogen supply chains in more detail when assessing the role of green hydrogen in future energy transition scenarios.
arXiv
We develop a new simulation method for multidimensional diffusions with sticky boundaries. The challenge lies in simulating the sticky boundary behavior, for which standard methods such as the Euler scheme fail. We approximate the sticky diffusion process by a multidimensional continuous-time Markov chain (CTMC), which can be simulated easily. We develop two ways of constructing the CTMC: approximating the infinitesimal generator of the sticky diffusion by finite differences along the standard coordinate directions, and matching local moments using the drift and the eigenvectors of the covariance matrix as transition directions. The first approach does not always guarantee a valid Markov chain, whereas the second one does. We show that both construction methods yield a first-order simulation scheme that captures the sticky behavior and is free from the curse of dimensionality. We apply our method to two applications: a multidimensional Brownian motion with all dimensions sticky, which arises as the limit of a queueing system with an exceptional service policy, and a multi-factor short rate model for a low interest rate environment in which the stochastic factors are unbounded but the short rate is sticky at zero.
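As a rough illustration of the first construction, the following one-dimensional sketch builds up/down jump rates from a finite-difference approximation of the generator and simulates the resulting CTMC path by exponential holding times. The sticky-boundary modification and the eigenvector-based construction are omitted, and all coefficients, grid sizes, and names are illustrative assumptions rather than the authors' scheme.

```python
import numpy as np

# 1D sketch: finite-difference CTMC approximation of dX = mu(X) dt + sigma(X) dW
# on a uniform grid of spacing h (sticky boundary handling omitted).
def ctmc_rates(x, mu, sigma, h):
    """Up/down jump rates at grid point x matching drift and variance locally."""
    var = sigma(x) ** 2
    up = var / (2 * h**2) + max(mu(x), 0.0) / h
    down = var / (2 * h**2) + max(-mu(x), 0.0) / h
    return up, down

def simulate_ctmc(x0, mu, sigma, h, t_max, rng):
    """Simulate one CTMC path via exponential holding times and jump draws."""
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < t_max:
        up, down = ctmc_rates(x, mu, sigma, h)
        total = up + down
        if total <= 0.0:                 # absorbing grid point
            break
        t += rng.exponential(1.0 / total)
        x += h if rng.random() < up / total else -h
        path.append((t, x))
    return path

# Example with Ornstein-Uhlenbeck-type coefficients (illustrative only)
rng = np.random.default_rng(0)
path = simulate_ctmc(x0=0.5,
                     mu=lambda x: -2.0 * x,    # mean reversion toward 0
                     sigma=lambda x: 0.3,
                     h=0.02, t_max=1.0, rng=rng)
print("steps:", len(path), "terminal value:", round(path[-1][1], 4))
```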
arXiv
Stock market movements are influenced by public and private information shared through news articles, company reports, and social media discussions. Analyzing these vast data sources can give market participants an edge in making a profit. However, the majority of studies in the literature rely on traditional approaches that fall short in analyzing unstructured, vast textual data. In this study, we review the extensive literature on text-based stock market analysis. We present input data types and cover the main textual data sources and their variations. We then present feature representation techniques, cover the analysis techniques, and create a taxonomy of the main stock market forecast models. Importantly, we discuss representative work in each category of the taxonomy and analyze their respective contributions. Finally, we report findings on unaddressed open problems and give suggestions for future work. The aim of this study is to survey the main stock market analysis models, text representation techniques for financial market prediction, and shortcomings of existing techniques, and to propose promising directions for future research.
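As a deliberately minimal example of the kind of feature-representation pipeline this literature covers, the sketch below pairs TF-IDF bag-of-words features with a linear classifier on toy headlines. The data, labels, and model choice are invented for illustration and are not taken from the survey.

```python
# Toy text-feature pipeline: TF-IDF bag-of-words features feeding a linear
# classifier for next-day price direction. Headlines and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Company beats earnings expectations and raises guidance",
    "Regulator opens investigation into accounting practices",
    "Record quarterly revenue driven by strong demand",
    "Profit warning issued after supply chain disruptions",
]
direction = [1, 0, 1, 0]   # 1 = price up next day, 0 = down (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                      LogisticRegression())
model.fit(headlines, direction)

# Expected to lean toward the "up" class given the toy training set
print(model.predict(["Earnings beat expectations on strong demand"]))
```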