Research articles for 2020-12-06
arXiv
We obtain a canonical representation for block matrices. The representation facilitates simple computation of the determinant, the matrix inverse, and other powers of a block matrix, as well as the matrix logarithm and the matrix exponential. These results are particularly useful for block covariance and block correlation matrices, where evaluation of the Gaussian log-likelihood and estimation are greatly simplified. We illustrate this with an empirical application using a large panel of daily asset returns. Moreover, the representation paves new ways to regularize large covariance/correlation matrices and to test for block structures in matrices.
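The abstract does not spell out the canonical representation, but the kind of block identity it exploits can be illustrated with the standard 2x2 Schur-complement formulas. The sketch below (plain NumPy, with block names $A, B, C, D$ chosen here for illustration) checks the determinant identity numerically; it is not the paper's representation.

```python
import numpy as np

# Illustration only: the classical 2x2 block (Schur complement) identities,
# not the canonical representation proposed in the paper.
rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n)) + n * np.eye(n)   # shift diagonal so A is invertible
B = rng.normal(size=(n, n))
C = rng.normal(size=(n, n))
D = rng.normal(size=(n, n)) + n * np.eye(n)

M = np.block([[A, B], [C, D]])

# det(M) = det(A) * det(D - C A^{-1} B), with S the Schur complement of A
S = D - C @ np.linalg.solve(A, B)
print(np.isclose(np.linalg.det(A) * np.linalg.det(S), np.linalg.det(M)))  # True

# The inverse of M can likewise be assembled block-wise from A^{-1} and S^{-1}.
```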
arXiv
We introduce a novel parametrization of the correlation matrix. The reparametrization facilitates modeling of correlation and covariance matrices by an unrestricted vector, where positive definiteness is an innate property. This parametrization can be viewed as a generalization of Fisher's Z-transformation to higher dimensions and has a wide range of potential applications. An algorithm for reconstructing the unique $n \times n$ correlation matrix from any $d$-dimensional vector (with $d = n(n-1)/2$) is provided, and we derive its numerical complexity.
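The abstract does not describe the algorithm itself. As a hedged sketch, a matrix-logarithm-based parametrization of this kind can be inverted by fixing the off-diagonal elements of $\log C$ and iterating on its diagonal until the reconstructed matrix has a unit diagonal; the function names and the specific iteration below are illustrative assumptions, not quoted from the paper.

```python
import numpy as np

def _sym_expm(G):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(G)
    return (Q * np.exp(w)) @ Q.T

def corr_to_vec(C):
    """d = n(n-1)/2 vector: off-diagonal elements of log(C)."""
    w, Q = np.linalg.eigh(C)
    L = (Q * np.log(w)) @ Q.T
    return L[np.triu_indices_from(L, k=1)]

def vec_to_corr(v, n, tol=1e-12, max_iter=200):
    """Reconstruct the correlation matrix whose matrix log has off-diagonal v,
    by adjusting the free diagonal until the exponential has a unit diagonal."""
    G = np.zeros((n, n))
    G[np.triu_indices(n, k=1)] = v
    G = G + G.T
    x = np.zeros(n)                       # candidate diagonal of log C
    for _ in range(max_iter):
        np.fill_diagonal(G, x)
        C = _sym_expm(G)
        delta = np.log(np.diag(C))        # zero exactly when diag(C) = 1
        if np.max(np.abs(delta)) < tol:
            break
        x = x - delta
    return C

# round trip on a small correlation matrix
C0 = np.array([[1.0, 0.5, 0.2],
               [0.5, 1.0, 0.3],
               [0.2, 0.3, 1.0]])
print(np.allclose(vec_to_corr(corr_to_vec(C0), 3), C0, atol=1e-8))  # True
```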
arXiv
Why do biased predictions arise? What interventions can prevent them? We evaluate 8.2 million algorithmic predictions of math performance from $\approx$400 AI engineers, each of whom developed an algorithm under a randomly assigned experimental condition. Our treatment arms modified programmers' incentives, training data, awareness, and/or technical knowledge of AI ethics. We then assess out-of-sample predictions from their algorithms using randomized audit manipulations of algorithm inputs and ground-truth math performance for 20K subjects. We find that biased predictions are mostly caused by biased training data. However, one-third of the benefit of better training data comes through a novel economic mechanism: Engineers exert greater effort and are more responsive to incentives when given better training data. We also assess how prediction performance varies with engineers' demographic characteristics and with their scores on a psychological test of implicit bias (IAT) concerning gender and careers. We find no evidence that female, minority, and low-IAT engineers exhibit lower bias or discrimination in their code. However, we do find that prediction errors are correlated within demographic groups, which creates performance improvements through cross-demographic averaging. Finally, we quantify the benefits and tradeoffs of practical managerial or policy interventions, such as technical advice, simple reminders, and improved incentives, for decreasing algorithmic bias.
arXiv
We show that, for a range of models related to production networks, competitive equilibria can be recovered as solutions to dynamic programs. Although these programs fail to be contractive, we show that they are fully tractable when matched with the right tools of analysis. These tools add analytical power and open new avenues for computation. As an illustration, we cover applications related to Coase's theory of the firm, including equilibria in linear production chains with transaction costs and other kinds of frictions, as well as equilibria in production networks with multiple partners. We show how the same techniques also extend to other equilibrium and decision problems, such as the distribution of management layers within firms and the spatial distribution of cities.
arXiv
A minimal degree of central bank credibility, with a non-zero probability of not reneging on its commitment ("quasi-commitment"), is a necessary condition for anchoring inflation expectations and stabilizing inflation dynamics. By contrast, a complete lack of credibility, with the certainty that the policy maker will renege on its commitment ("optimal discretion"), leads to local instability of inflation dynamics. In the textbook example of the new-Keynesian Phillips curve, the response of the policy instrument to inflation gaps under optimal policy with quasi-commitment has the opposite sign to that under optimal discretion, which explains this bifurcation.
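For orientation, the textbook new-Keynesian Phillips curve referred to here is commonly written as (standard textbook notation, assumed rather than taken from the abstract)

$$\pi_t = \beta\, E_t \pi_{t+1} + \kappa x_t,$$

where $\pi_t$ is inflation, $x_t$ the output gap, $\beta \in (0,1)$ the discount factor, and $\kappa > 0$ the slope; the bifurcation discussed above concerns the sign of the optimal policy response to inflation gaps under quasi-commitment versus discretion within this model.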
arXiv
We consider an infinite-horizon optimal consumption problem for an individual who forms a consumption habit based on an exponentially-weighted average of her past rate of consumption. The novelty of our approach is in introducing habit formation through a constraint, rather than through the objective function, as is customary in the existing habit-formation literature. Specifically, we require that the individual consume at a rate that is greater than a certain proportion $\alpha$ of her consumption habit. Our habit-formation model allows for both addictive ($\alpha=1$) and non-addictive ($0<\alpha<1$) habits. Assuming that the individual invests in a risk-free market, we formulate and solve a deterministic control problem to maximize the discounted CRRA utility of the individual's consumption-to-habit process subject to the said habit-formation constraint. We derive the optimal consumption policies explicitly in terms of the solution of a nonlinear free-boundary problem, which we analyze in detail. Impatient individuals (those with sufficiently large utility discount rates) always consume above the minimum rate; thus, they eventually attain the minimum wealth-to-habit ratio. Patient individuals (those with small utility discount rates) consume at the minimum rate if their wealth-to-habit ratio is below a threshold, and above it otherwise. By consuming patiently, these individuals maintain a wealth-to-habit ratio that is greater than the minimum acceptable level.
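As a minimal sketch of the setup described above (the symbols $\rho$, $\delta$, $\gamma$ are assumed notation, not taken from the abstract): the habit evolves as an exponentially-weighted average of past consumption and consumption is bounded below by a fraction of the habit,

$$dh_t = \rho\,(c_t - h_t)\,dt, \qquad c_t \ge \alpha\, h_t,$$

while the individual maximizes the discounted CRRA utility of the consumption-to-habit ratio,

$$\int_0^\infty e^{-\delta t}\, \frac{(c_t/h_t)^{1-\gamma}}{1-\gamma}\, dt.$$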
arXiv
We study a quadratic hedging problem for a sequence of contingent claims with random weights in discrete time. We obtain the optimal hedging strategy explicitly in a recursive representation, without imposing the non-degeneracy (ND) condition on the model and square integrability on hedging strategies. We relate the general results to hedging under random horizon and fair pricing in the quadratic sense. We illustrate the significance of our results in an example in which the ND condition fails.
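For context, the classical unweighted discrete-time quadratic hedging criterion minimizes the expected squared hedging error; with assumed notation ($H$ the claim, $\Delta S_k$ the price increments, $\vartheta$ the strategy, $V_0$ the initial capital),

$$\min_{V_0,\,\vartheta}\ \mathbb{E}\Big[\Big(H - V_0 - \sum_{k=1}^{T} \vartheta_k\, \Delta S_k\Big)^{2}\Big].$$

The setting above attaches random weights to a sequence of such claims and dispenses with the usual non-degeneracy and square-integrability requirements.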
arXiv
Tail risk protection is a focus of the financial industry and requires solid mathematical and statistical tools, especially when a trading strategy is derived from it. The recent hype around machine learning (ML) has raised the need to display and understand the functionality of ML tools. In this paper, we present a dynamic tail risk protection strategy that targets a maximum predefined level of risk, measured by Value-at-Risk, while controlling for participation in bull market regimes. We propose different weak classifiers, parametric and non-parametric, that estimate the exceedance probability of the risk level, from which we derive trading signals in order to hedge tail events. We then compare the different approaches in terms of both statistical and trading-strategy performance, and finally we propose an ensemble classifier that produces a meta tail risk protection strategy, improving both generalization and trading performance.
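As a hypothetical illustration of the weak-classifier-to-signal pipeline described above (a parametric logistic model for the probability that the next return breaches the VaR level), the sketch below uses toy features, an arbitrary threshold, and function names that are assumptions rather than the paper's specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def tail_risk_signals(returns, var_level=0.05, threshold=0.5, window=250):
    """Illustrative weak classifier: estimate P(next return < -VaR) from
    lagged features and step out of the market when that exceedance
    probability rises above `threshold`."""
    r = np.asarray(returns)
    var = -np.quantile(r[:window], var_level)      # simple in-sample VaR estimate
    X = np.column_stack([np.abs(r[:-1]), r[:-1]])  # toy lagged features
    y = (r[1:] < -var).astype(int)                 # tail-event indicator
    clf = LogisticRegression().fit(X[:window], y[:window])
    p_exceed = clf.predict_proba(X)[:, 1]
    signal = (p_exceed < threshold).astype(int)    # 1 = invested, 0 = hedged
    return np.concatenate([[1], signal])           # one position per return

# usage on simulated heavy-tailed returns
rng = np.random.default_rng(1)
rets = rng.standard_t(df=4, size=1000) * 0.01
pos = tail_risk_signals(rets)
strategy_rets = pos[1:] * rets[1:]  # position at t uses information through t-1
```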
arXiv
By 2030, Austria aims to meet 100% of its electricity demand from domestic renewable sources, with most of the additional generation coming from wind and solar energy. Apart from the benefits of reducing CO2 emissions and, potentially, system cost, wind power is associated with negative impacts at the local level, such as interference with landscape aesthetics. Some of these impacts might be avoided by using alternative renewable energy technologies. Thus, we quantify the opportunity cost of wind power versus its best feasible alternative, solar photovoltaics, using the power system model medea. Our findings suggest that the cost of undisturbed landscapes is considerable, particularly when solar PV is mainly realized on rooftops. Under a wide range of assumptions, the opportunity cost of wind power is high enough to allow for significant compensation of those affected by local, negative wind turbine externalities.
arXiv
How do ethical arguments affect AI adoption in business? We randomly expose business decision-makers to arguments used in AI fairness activism. Arguments emphasizing the inescapability of algorithmic bias lead managers to abandon AI in favor of manual review by humans and to report greater expectations of lawsuits and negative PR. These effects persist even when AI lowers gender and racial disparities and when engineering investments to address AI fairness are feasible. Emphasis on status quo comparisons yields opposite effects. We also measure the effects of "scientific veneer" in AI ethics arguments. Scientific veneer changes managerial behavior but does not asymmetrically benefit favorable (versus critical) AI activism.