# Research articles for 2019-08-26

arXiv

Presidential debates are thought to provide an important public good by revealing information on candidates to voters. However, this may not always be the case. We consider an endogenous model of presidential debates in which an incumbent and a contender (who is privately informed about her own quality) publicly announce whether they are willing to participate in a public debate, taking into account that the voter's choice of candidate depends on her beliefs about the candidates' qualities and on the state of nature. Surprisingly, we find that in equilibrium a debate occurs, or does not occur, independently of the contender's quality and of the sequence of the candidates' announcements; the announcements are therefore uninformative.

arXiv

We propose a multi-factor polynomial framework to model and hedge long-term electricity contracts with a delivery period. This framework has various advantages: the computation of forwards and of correlations between different forwards is fully explicit, and the model can be calibrated easily and accurately to observed electricity forward curves. Electricity markets suffer from non-storability and poor medium- to long-term liquidity. Therefore, we suggest a rolling hedge which only uses liquid forward contracts and is risk-minimizing in the sense of Föllmer and Schweizer. We calibrate the model to over eight years of German power calendar year forward curves and investigate the quality of the risk-minimizing hedge over various time horizons.

arXiv

Several expansion methods have been proposed for approximately pricing options that have no exact closed-form formula. Benhamou et al. (2010) present the smart expansion method, which directly expands the expected value of the payoff function with respect to the volatility of volatility and then uses it to price options in the stochastic volatility model. In this paper, we apply their method to the stochastic volatility model with stochastic interest rates and present the expansion formula for pricing options up to second order. Numerical studies are then performed to compare our approximation formula with Monte Carlo simulation. We find that our formula yields results numerically comparable to those of the method proposed by Grzelak et al. (2012), which uses an approximation of the characteristic function.
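Since the abstract benchmarks the expansion formula against Monte Carlo simulation, a minimal Monte Carlo pricer helps fix ideas. The sketch below prices a European call under plain Black-Scholes dynamics only; the paper's setting adds stochastic volatility and stochastic interest rates, and the function name and parameters here are illustrative, not the authors' code.

```python
import numpy as np

def mc_call_price(s0, k, r, sigma, t, n_paths=200_000, seed=0):
    """Monte Carlo price of a European call under Black-Scholes dynamics.

    Illustrative benchmark only: the paper's model additionally features
    stochastic volatility and stochastic interest rates.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Simulate terminal stock prices under the risk-neutral measure
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    # Discounted average payoff
    return np.exp(-r * t) * np.maximum(st - k, 0.0).mean()

print(mc_call_price(100.0, 100.0, 0.05, 0.2, 1.0))
```

With these parameters the closed-form Black-Scholes price is about 10.45, so the simulated value should land close to it; an approximation formula would be compared against such a Monte Carlo estimate in the same way.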

arXiv

Researchers in both academia and industry have long been interested in the stock market, and numerous approaches have been developed to predict future trends in stock prices. Recently, there has been growing interest in utilizing graph-structured data in the computer science research communities. Methods that use relational data for stock market prediction have recently been proposed, but they are still in their infancy. First, the quality of the information collected from different types of relations can vary considerably, and no existing work has examined the effect of using different relation types on stock market prediction or sought an effective way to selectively aggregate information across relation types. Furthermore, existing works have focused only on individual stock prediction, which is analogous to the node classification task. To address this, we propose a hierarchical attention network for stock prediction (HATS) which uses relational data for stock market prediction. HATS selectively aggregates information on different relation types and adds it to the representation of each company. Specifically, node representations are initialized with features extracted by a feature extraction module; HATS then serves as a relational modeling module over these representations, and the updated node representations are fed into a task-specific layer. Our method is used to predict not only individual stock prices but also market index movements, which is analogous to the graph classification task. The experimental results show that performance can change depending on the relational data used, and HATS, which can automatically select relevant information, outperformed all existing methods.
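The pipeline described above — per-relation neighbour aggregation, attention over relation types, then adding the weighted mix back to each node representation — can be sketched as follows. This is a minimal NumPy sketch under assumed shapes and a simple dot-product relation score; `relation_attention_update` and `w_rel` are hypothetical names, not the authors' implementation.

```python
import numpy as np

def relation_attention_update(h, adjs, w_rel):
    """One HATS-style update (sketch): aggregate neighbours per relation
    type, score each relation with a learned vector, and add the
    attention-weighted mix back to the node representations.

    h     : (n, d) node representations
    adjs  : list of (n, n) adjacency matrices, one per relation type
    w_rel : (d,) scoring vector for relation-level attention (assumed form)
    """
    msgs = []
    for a in adjs:
        deg = a.sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0          # avoid division by zero for isolated nodes
        msgs.append((a @ h) / deg)   # mean of neighbour representations
    msgs = np.stack(msgs)            # (r, n, d): one message per relation type
    scores = msgs @ w_rel            # (r, n): one score per relation and node
    att = np.exp(scores - scores.max(axis=0))
    att = att / att.sum(axis=0)      # softmax over relation types, per node
    return h + (att[..., None] * msgs).sum(axis=0)

# Toy usage: 4 companies, 3 features, 2 relation types
h = np.random.default_rng(0).standard_normal((4, 3))
adjs = [np.eye(4), np.ones((4, 4))]
print(relation_attention_update(h, adjs, np.ones(3)).shape)  # (4, 3)
```

The softmax over relation types is what lets the model down-weight noisy relations automatically, which matches the abstract's claim that HATS can selectively aggregate information across relation types.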

arXiv

This paper surveys the evolution of industrial concentration in the Brazilian automotive market as well as its positioning in the world market. Data available from OICA (International Organization of Motor Vehicle Manufacturers) were used to better understand the characteristics of the Brazilian market on the world stage. A cluster analysis (by the k-means technique) places Brazil, with its concentration profile, in a group of countries including the US and South Korea, in contrast to countries such as Germany, Canada and Japan, or even France and Italy. It is rather usual to characterize market structure through industrial concentration indices: we revisit CR ratios (concentration ratios), the HHI (Herfindahl-Hirschman index), B (Rosenbluth index), and the CCI (Horvath comprehensive concentration index). Data from Anfavea-Brazil (Associacao Nacional dos Fabricantes de Veiculos Automotores) were used to estimate these indices in the period 2012-2018 for the national automobile industry. The values obtained indicate that by 1998 the automotive sector was behaving as a differentiated oligopoly. However, the values for more recent periods (particularly CR4) strongly indicate that the sector is currently only moderately concentrated and is moving toward a quasi-devolved market.
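The four concentration indices named above all reduce to simple functions of the firms' market shares. The sketch below implements the textbook definitions; the sample shares are made up for illustration and are not Anfavea data.

```python
def concentration_indices(shares):
    """Compute CR4, HHI, Rosenbluth B, and Horvath CCI from market shares.

    `shares` are fractions summing to 1, e.g. 0.25 for a 25% share.
    """
    s = sorted(shares, reverse=True)
    # CR4: combined share of the four largest firms
    cr4 = sum(s[:4])
    # HHI: sum of squared shares (on the 0-1 scale)
    hhi = sum(x * x for x in s)
    # Rosenbluth: B = 1 / (2 * sum_i i*s_i - 1), firms ranked by share (i = 1..n)
    b = 1.0 / (2.0 * sum(i * x for i, x in enumerate(s, start=1)) - 1.0)
    # Horvath CCI: leading share plus remaining squared shares weighted by (2 - s_i)
    cci = s[0] + sum(x * x * (2.0 - x) for x in s[1:])
    return {"CR4": cr4, "HHI": hhi, "B": b, "CCI": cci}

# Hypothetical shares for five firms (illustration only)
print(concentration_indices([0.35, 0.25, 0.15, 0.15, 0.10]))
```

On the 0-1 scale an HHI around 0.15-0.25 corresponds to the "moderately concentrated" range the abstract refers to, while CR4 close to 1 would indicate a tight oligopoly.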

arXiv

In this paper we propose a new model for pricing stock and dividend derivatives. We jointly specify dynamics for the stock price and the dividend rate such that the stock price is positive and the dividend rate non-negative. In its simplest form, the model features a dividend rate that is mean-reverting around a constant fraction of the stock price. The advantage of directly specifying dynamics for the dividend rate, as opposed to the more common approach of modeling the dividend yield, is that it is easier to keep the distribution of cumulative dividends tractable. The model is non-affine but does belong to the more general class of polynomial processes, which allows us to compute all conditional moments of the stock price and the cumulative dividends explicitly. In particular, we have closed-form expressions for the prices of stock and dividend futures. Prices of stock and dividend options are accurately approximated using a moment matching technique based on the principle of maximal entropy.

arXiv

Revenue sharing contracts between Content Providers (CPs) and Internet Service Providers (ISPs) can act as leverage for enhancing the infrastructure of the Internet. ISPs can be incentivized to make investments in network infrastructure that improve Quality of Service (QoS) for users if attractive contracts are negotiated between them and CPs. The idea is that part of the net profit gained by CPs is given to ISPs to invest in the network. The moral-hazard economic framework is used to model such an interaction, in which a principal determines a contract and an agent reacts by adapting her effort. In our setting, several competing CPs interact through one common ISP. Two cases are studied: (i) the ISP differentiates between the CPs and makes a (potentially) different investment to improve the QoS of each CP, and (ii) the ISP does not differentiate between CPs and makes a common investment for all of them. The latter scenario can be viewed as *network-neutral behavior* on the part of the ISP. We analyse the optimal contracts and show that the CP that can better monetize its demand always prefers the non-neutral regime. Interestingly, ISP revenue, as well as social utility, is also found to be higher under the non-neutral regime.

arXiv

The disclosure of the VW emission manipulation scandal caused a quasi-experimental market shock to the observable environmental quality of VW diesel vehicles. To investigate the market reaction to this shock, we collect data from a used-car online advertisement platform. We find that the supply of used VW diesel vehicles increases after the VW emission scandal. The positive supply-side effects increase with the probability of manipulation. Furthermore, we find negative impacts on the asking prices of used cars subject to a high probability of manipulation. We rationalize these findings with a model of sorting by the environmental quality of used cars.

arXiv

Many large cities are found at locations with certain first nature advantages. Yet, those exogenous locational features may not be the most potent forces governing the spatial pattern of cities. In particular, the population size, spacing, and industrial composition of cities exhibit simple, persistent, and monotonic relationships. Theories of economic agglomeration suggest that this regularity is a consequence of interactions between endogenous agglomeration and dispersion forces. This paper reviews the extant formal models that explain the spatial pattern together with the size distribution of cities, and discusses the remaining research questions to be answered in this literature. To obtain results about explicit spatial patterns of cities, a model needs to depart from the most popular two-region and systems-of-cities frameworks in urban and regional economics, in which there is no variation in interregional distance. This is one of the major reasons that only a few formal models have been proposed in this literature. To draw as many implications as possible from the extant theories, this review includes extensive discussion of the behavior of many-region extensions of these models. The mechanisms that link the spatial pattern of cities and the diversity in city sizes are also discussed in detail.

arXiv

This study examines the role of pawnshops as a risk-coping device in prewar Japan. Using data on pawnshop loans for more than 250 municipalities and exploiting the 1918-1920 influenza pandemic as a natural experiment, we find that the adverse health shock increased the total amount of loans from pawnshops. This is because those who regularly relied on pawnshops borrowed more money from them than usual to cope with the adverse health shock, and not because the number of people who used pawnshops increased.

arXiv

The choice of the ambiguity radius is critical when an investor uses the distributionally robust approach to address the sensitivity of the portfolio optimization problem to uncertainties in the asset return distribution. The radius cannot be set too large, because the larger the ambiguity set, the worse the portfolio return; it cannot be too small either, or one loses the robust protection. This tradeoff demands a financial understanding of the ambiguity set. In this paper, we propose a non-robust interpretation of the distributionally robust optimization (DRO) problem. By relating the impact of an ambiguity set to the impact of a non-robust chance constraint, our interpretation allows investors to understand the size of the ambiguity set through parameters that are directly linked to investment performance. We first show that for general $\phi$-divergences, a DRO problem is asymptotically equivalent to a class of mean-deviation problems, in which the ambiguity radius controls the investor's risk preference. Based on this non-robust reformulation, we then show that when a boundedness constraint is added to the investment strategy, the DRO problem can be cast as a chance-constrained optimization (CCO) problem without distributional uncertainties. If the boundedness constraint is removed, the CCO problem is shown to perform uniformly better than the DRO problem, irrespective of the radius of the ambiguity set, the choice of the divergence measure, or the tail heaviness of the center distribution. Our results apply both to the widely used Kullback-Leibler (KL) divergence, which requires the distribution of the objective function to be exponentially bounded, and to more general divergence measures that allow heavy-tailed distributions such as the Student $t$ and the lognormal.
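In standard notation (assumed here; the paper's own notation may differ), with center distribution $P$, loss $\ell(x,\xi)$, and radius $\delta$, the KL-ball DRO problem reads

$$
\min_{x}\ \sup_{Q:\, D_{\mathrm{KL}}(Q\|P)\le\delta} \mathbb{E}_{Q}\big[\ell(x,\xi)\big],
\qquad
D_{\mathrm{KL}}(Q\|P)=\mathbb{E}_{Q}\!\left[\log\frac{dQ}{dP}\right],
$$

and for small $\delta$ the inner supremum behaves like

$$
\mathbb{E}_{P}\big[\ell(x,\xi)\big] + \sqrt{2\delta}\,\operatorname{Std}_{P}\!\big(\ell(x,\xi)\big) + o(\sqrt{\delta}),
$$

which illustrates the mean-deviation equivalence claimed in the abstract: the ambiguity radius $\delta$ acts as an explicit weight on the deviation term, i.e. on the investor's risk aversion.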