# Research articles for 2020-12-28

arXiv

The paper introduces a simple and fast computational method for the high-dimensional integrals that arise in solving high-dimensional Kolmogorov partial differential equations (PDEs). The new machine-learning-based method solves a stochastic weighted minimization problem with stochastic gradient descent, inspired by a high-order weak approximation scheme for stochastic differential equations (SDEs) with Malliavin weights. Solutions to high-dimensional Kolmogorov PDEs, or expectations of functionals of solutions to high-dimensional SDEs, are then accurately approximated without suffering from the curse of dimensionality. Numerical examples for PDEs and SDEs in up to 100 dimensions, using second- and third-order discretization schemes, demonstrate the effectiveness of the method.
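As background, the expectations being approximated admit a plain Monte Carlo baseline via the Feynman-Kac representation. The sketch below uses a first-order Euler-Maruyama scheme, not the paper's high-order Malliavin-weighted schemes, and all function names and parameters are illustrative:

```python
import numpy as np

# Hypothetical sketch: first-order Euler-Maruyama baseline for
# approximating u(0, x) = E[f(X_T)], the Feynman-Kac representation of
# a Kolmogorov PDE solution (the paper uses higher-order weak schemes
# with Malliavin weights instead of this simple scheme).
def euler_maruyama_expectation(x0, drift, diffusion, f, T=1.0,
                               n_steps=50, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x += drift(x) * dt + diffusion(x) * dw
    return f(x).mean()

# 10-dimensional geometric Brownian motion, f(x) = mean of coordinates;
# the exact value here is exp(0.05 * T) ≈ 1.0513.
d = 10
est = euler_maruyama_expectation(
    x0=np.ones(d),
    drift=lambda x: 0.05 * x,
    diffusion=lambda x: 0.2 * x,
    f=lambda x: x.mean(axis=1),
)
```

Such a baseline has weak error of order one in the step size; the point of the paper's higher-order schemes is to reach a given accuracy with far fewer time steps.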

arXiv

We consider games of chance played by someone with external capital that cannot be applied to the game, and determine how this affects risk-adjusted optimal betting. Specifically, we use Kelly optimization as a metric, optimizing the expected logarithm of total capital, including both the capital in play and the external capital. For games with multiple rounds, we determine the optimal strategy through dynamic programming and construct a close approximation through the WKB method. The strategy can be described in terms of short-term utility functions, with risk aversion depending on the ratio of the amount in the game to the external money. Thus, a rational player's behavior varies between conservative play that approaches the Kelly strategy when they can invest a larger fraction of total wealth, and extremely aggressive play that maximizes linear expectation when a larger portion of their capital is locked away. Because expected future productivity always counts as an external resource, this runs counter to the conventional wisdom that super-Kelly betting is a ruinous proposition.
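The single-round version of this tradeoff can be sketched numerically; this is a toy model with illustrative parameter names, not the paper's multi-round dynamic program:

```python
import numpy as np

# Toy single-round sketch: Kelly-style optimization when part of total
# wealth is external capital that cannot be wagered. p: win probability,
# b: net odds, w: capital in play, c: external capital.
def optimal_fraction(p, b, w, c, grid=10_001):
    f = np.linspace(0.0, 1.0, grid)
    # Expected log of *total* wealth (in-play plus external).
    growth = p * np.log(c + w * (1 + f * b)) + (1 - p) * np.log(c + w * (1 - f))
    return f[np.argmax(growth)]

p, b = 0.6, 1.0
kelly = p - (1 - p) / b                                 # classical Kelly = 0.2
f_no_external = optimal_fraction(p, b, w=1.0, c=1e-9)   # recovers ~Kelly
f_locked_away = optimal_fraction(p, b, w=1.0, c=100.0)  # near-linear utility, bets all
```

With negligible external capital the optimizer lands on the classical Kelly fraction; with large locked-away capital the log utility is locally linear and the optimal bet approaches the full in-play bankroll.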

arXiv

This paper studies the equilibrium price of a continuous time asset traded in a market with heterogeneous investors. We consider a positive mean reverting asset and two groups of investors who have different beliefs on the speed of mean reversion and the mean level. We provide an equivalent condition for bubbles to exist and show that price bubbles may not form even though there are heterogeneous beliefs. This condition is directly related to the drift term of the asset. In addition, we characterize the minimal equilibrium price as the unique $C^2$ solution of a differential equation and express it using confluent hypergeometric functions.

arXiv

The objective of this paper is to verify that deep reinforcement learning, a current cutting-edge artificial intelligence technology, can be applied to portfolio management. We improve on the existing deep reinforcement learning portfolio model with several innovations. Unlike many previous studies that use discrete trading signals, we allow the agent to short in a continuous action space, design an arbitrage mechanism based on Arbitrage Pricing Theory, and redesign the activation function for acquiring action vectors. In addition, we redesign the neural networks for reinforcement learning with reference to deep neural networks that process image data. In experiments, we apply the model to several randomly selected portfolios, including CSI300, which represents the market's rate of return, and randomly selected constituents of CSI500. The experimental results show that, regardless of which stocks we select, the portfolios almost always achieve a higher return than the market itself; that is, we can beat the market using deep reinforcement learning.

arXiv

This paper proposes a simple technical approach for the analytical derivation of Point-in-Time PD (probability of default) forecasts, with minimal data requirements. The inputs required are the current and future Through-the-Cycle PDs of the obligors, their last known default rates, and a measurement of the systematic dependence of the obligors. Technically, the forecasts are made from within a classical asset-based credit portfolio model, with the additional assumption of a simple (first/second order) autoregressive process for the systematic factor. This paper elaborates in detail on the practical issues of implementation, especially on the parametrization alternatives. We also show how the approach can be naturally extended to low-default portfolios with volatile default rates, using Bayesian methodology. Furthermore, expert judgments on the current macroeconomic state, although not necessary for the forecasts, can be embedded into the model using the Bayesian technique. The resulting PD forecasts can be used for the derivation of expected lifetime credit losses as required by the newly adopted accounting standard IFRS 9. In doing so, the presented approach is endogenous, as it does not require any exogenous macroeconomic forecasts, which are notoriously unreliable and often subjective. Also, it does not require any dependency modeling between PDs and macroeconomic variables, which often proves to be cumbersome and unstable.
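The core mechanics can be sketched in a Vasicek-type one-factor setting with an AR(1) systematic factor. This is a hedged simplification with illustrative parameter names, not the paper's exact parametrization:

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal

# Hedged sketch of a Vasicek-type Point-in-Time PD forecast with an
# AR(1) systematic factor (parameter names are illustrative, not the
# paper's notation): rho is the asset correlation, phi the factor
# autocorrelation, z0 the last observed systematic factor.
def pit_pd_forecast(pd_ttc, rho, phi, z0, horizon):
    c = N.inv_cdf(pd_ttc)             # default threshold from the TTC PD
    mean_z = phi ** horizon * z0      # AR(1) conditional mean of the factor
    var_z = 1 - phi ** (2 * horizon)  # AR(1) conditional variance
    # E[Phi((c - sqrt(rho) * Z) / sqrt(1 - rho))] for Z ~ N(mean_z, var_z)
    return N.cdf((c - sqrt(rho) * mean_z) / sqrt(1 - rho + rho * var_z))

# In a benign state (z0 > 0) the PiT PD sits below the TTC PD and
# mean-reverts toward it as the forecast horizon grows.
pd1 = pit_pd_forecast(0.02, rho=0.12, phi=0.5, z0=1.0, horizon=1)
pd5 = pit_pd_forecast(0.02, rho=0.12, phi=0.5, z0=1.0, horizon=5)
```

The Gaussian integral in the comment has a closed form, which is why the forecast needs no simulation: as the horizon grows, the factor forgets its current state and the PiT PD converges back to the TTC PD.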

arXiv

We have embedded the classical theory of stochastic finance into a differential geometric framework called Geometric Arbitrage Theory and show that it is possible to:

--Write arbitrage as curvature of a principal fibre bundle.

--Parameterize arbitrage strategies by the holonomy.

--Give the Fundamental Theorem of Asset Pricing a differential homotopic characterization.

--Characterize Geometric Arbitrage Theory by five principles and show that they are consistent with the classical theory of stochastic finance.

--Derive for a closed market the equilibrium solution for market portfolio and dynamics in the cases where:

-->Arbitrage is allowed but minimized.

-->Arbitrage is not allowed.

--Prove that the no-free-lunch-with-vanishing-risk condition implies the zero curvature condition. The converse is in general not true and additionally requires the Novikov condition for the instantaneous Sharpe Ratio Dynamics to be satisfied.

arXiv

The controversies around the 2020 US presidential election certainly cast serious doubts on the ability of the current voting system to represent the people's will. Is naive plurality voting suitable in an extremely polarized political environment? Alternative voting schemes, in which voters rank their choices instead of just voting for their first preference, are gradually gaining public support. However, these schemes do not capture crucial aspects of voter preferences such as disapproval of, and negativity toward, candidates. I argue that these unexpressed negativities are the predominant source of polarization in politics. I propose a voting scheme with an explicit expression of these negative preferences, so that we can simultaneously decipher both the popularity and the polarity of each candidate. The winner is picked by an optimal tradeoff between the most popular and the least polarizing candidate. By penalizing candidates for their polarization, we can discourage divisive campaign rhetoric and pave the way for potential third-party candidates.
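A toy version of the popularity/polarity tradeoff might look as follows; this is an illustrative scoring rule with made-up numbers, not the paper's exact mechanism:

```python
# Toy illustration (our own scoring rule, not the paper's exact scheme):
# each candidate has approval and strong-disapproval counts; the winner
# maximises popularity minus a polarization penalty.
def pick_winner(approvals, disapprovals, penalty=0.5):
    scores = {c: approvals[c] - penalty * disapprovals.get(c, 0)
              for c in approvals}
    return max(scores, key=scores.get)

winner = pick_winner({"X": 48, "Y": 40, "Z": 35},
                     {"X": 45, "Y": 5, "Z": 2})
# X is the most popular but heavily polarizing, so Y wins under penalty 0.5
```

The penalty parameter is the dial the abstract describes: at zero it reduces to plurality by approvals, while large values elect the least polarizing candidate.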

arXiv

This study presents, for the first time, the SWB-J index, a subjective well-being indicator for Japan based on Twitter data. The index is composed of eight dimensions of subjective well-being and is estimated from Twitter data using human-supervised sentiment analysis. The index is then compared with the analogous SWB-I index for Italy, in order to identify possible analogies and cultural differences. Further, through structural equation models, a causal assumption is tested to see whether the economic and health conditions of the country influence the latent well-being variable, and how this latent dimension affects the SWB-J and SWB-I indicators. It turns out that, as expected, economic and health welfare is only one aspect of the multidimensional well-being captured by the Twitter-based indicator.

arXiv

A market portfolio is a portfolio in which each asset is held at a weight proportional to its market value. Functionally generated portfolios are portfolios for which the logarithmic return relative to the market portfolio can be decomposed into a function of the market weights and a process of locally finite variation, and this decomposition is convenient for characterizing the long-term behavior of the portfolio. A permutation-weighted portfolio is a portfolio in which the assets are held at weights proportional to a permutation of their market values, and such a portfolio is functionally generated only for markets with two assets (except for the identity permutation). A reverse-weighted portfolio is a portfolio in which the asset with the greatest market weight is assigned the smallest market weight, the asset with the second-largest weight is assigned the second-smallest, and so forth. Although the reverse-weighted portfolio in a market with four or more assets is not functionally generated, it is still possible to characterize its long-term behavior using rank-based methods. This result is applied to a market of commodity futures, where we show that the reverse price-weighted portfolio substantially outperforms the price-weighted portfolio from 1977 to 2018.
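The reverse-weighted construction itself is straightforward; a minimal sketch, assuming distinct market values:

```python
import numpy as np

# Minimal sketch of the reverse-weighted portfolio, assuming distinct
# market values: the largest market weight is reassigned to the
# smallest-cap asset, the second largest to the second smallest, etc.
def reverse_weights(market_caps):
    caps = np.asarray(market_caps, dtype=float)
    w = caps / caps.sum()       # market weights
    order = np.argsort(-w)      # asset indices from largest to smallest
    rw = np.empty_like(w)
    rw[order] = w[order[::-1]]  # give each rank the weight of its mirror rank
    return rw

w_rev = reverse_weights([40, 30, 20, 10])
# market weights [0.4, 0.3, 0.2, 0.1] become [0.1, 0.2, 0.3, 0.4]
```

Since the new weights are a permutation of the market weights, they still sum to one; only the assignment of weight to asset is reversed by rank.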

arXiv

The popularity of business intelligence (BI) systems to support business analytics has increased tremendously in the last decade. The determination of data items that should be stored in the BI system is vital to ensure the success of an organisation's business analytics strategy. Expanding conventional BI systems often leads to high costs of internally generating, cleansing and maintaining new data items, whilst the additional data storage costs are in many cases of minor concern -- a conceptual difference from big data systems. Thus, the potential additional insights from a new data item in the BI system need to be balanced against the often high costs of data creation. While the literature acknowledges this decision problem, no model-based approach to inform this decision has hitherto been proposed. The present research describes a prescriptive framework to prioritise data items for business analytics and applies it to human resources. To achieve this goal, the proposed framework captures core business activities in a comprehensive process map and assesses their relative importance and possible data support with multi-criteria decision analysis.
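The multi-criteria scoring step can be illustrated with a toy weighted-sum model; the criteria, weights, and item names here are invented for illustration, and the paper's framework (which maps items to core business activities first) is richer:

```python
# Generic sketch of multi-criteria scoring for data-item prioritisation
# (illustrative criteria and weights, not the paper's framework):
# candidate data items are scored per criterion and ranked by a
# weighted sum.
criteria_weights = {"importance": 0.5, "data_support": 0.3, "cost_efficiency": 0.2}

items = {
    "absence_records": {"importance": 0.9, "data_support": 0.6, "cost_efficiency": 0.4},
    "exit_interviews": {"importance": 0.7, "data_support": 0.3, "cost_efficiency": 0.9},
}

def priority(scores):
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranking = sorted(items, key=lambda name: priority(items[name]), reverse=True)
```

A weighted sum is only the simplest aggregation rule in multi-criteria decision analysis; it already captures the tradeoff the abstract describes between insight potential and data-creation cost.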

arXiv

This paper analyzes the connection between the innovation activities of companies -- implemented before a financial crisis -- and their performance -- measured after such a time of crisis. Pertinent data about companies listed in the STAR Market Segment of the Italian Stock Exchange is analyzed. Innovation is measured through the level of investment in total tangible and intangible fixed assets in 2006-2007, while performance is captured through growth (variations of sales or of total assets), profitability (ROI or ROS evolution), and productivity (asset turnover or sales per employee) in the period 2008-2010. The variables of interest are analyzed and compared through statistical techniques and a cluster analysis. In particular, a Voronoi tessellation is implemented in a varying-centroids framework. In accord with a large part of the literature, we find that the performance of companies that innovate is not univocal. The statistical outliers are the most instructive cases for suggesting efficient strategies. In brief, a positive rate of investment is found to be preferable.

arXiv

In banking practice, rating transition matrices have become the standard approach of deriving multi-year probabilities of default (PDs) from one-year PDs, the latter normally being available from Basel ratings. Rating transition matrices have gained in importance with the newly adopted IFRS 9 accounting standard. Here, the multi-year PDs can be used to calculate the so-called expected credit losses (ECL) over the entire lifetime of relevant credit assets. A typical approach for estimating the rating transition matrices relies on calculating empirical rating migration counts and frequencies from rating history data. For small portfolios, however, this approach often leads to zero counts and high count volatility, which makes the estimations unreliable and unstable, and can also produce counter-intuitive prediction patterns such as non-parallel/crossing forward PD patterns. This paper proposes a structural model which overcomes these problems. We make a plausible assumption of an underlying autoregressive mean-reverting ability-to-pay process. With only three parameters, this sparse process can well describe an entire typical rating transition matrix, provided the one-year PDs of the rating classes are specified. The transition probabilities produced by the structural approach are well-behaved by design. The approach significantly reduces the statistical degrees of freedom of the estimated transition probabilities, which makes the rating transition matrix more reliable for small portfolios. The approach can be applied to data with as few as 50 observed rating transitions. Moreover, the approach can be efficiently applied to data consisting of continuous PDs (prior to rating discretization). In the IFRS 9 context, the approach offers an additional merit: it can easily account for the macroeconomic adjustments, which are required by the IFRS 9 accounting standard.
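A stripped-down version of such a structural transition matrix can be sketched as follows; this is our own simplification with illustrative parameters, not the paper's exact three-parameter model:

```python
import numpy as np
from statistics import NormalDist

N = NormalDist()

# Hedged sketch (our own simplification, not the paper's exact model):
# the ability-to-pay score follows an AR(1) mean-reverting process
# x' = rho * x + sqrt(1 - rho^2) * eps. Rating classes are bands of the
# standard-normal score; the transition probability from class i to
# class j is the mass the conditional distribution of x' puts on band j.
def transition_matrix(band_edges, rho):
    edges = [float("-inf")] + list(band_edges) + [float("inf")]
    mids = []  # representative score per class: median quantile of the band
    for lo, hi in zip(edges[:-1], edges[1:]):
        p_lo = N.cdf(lo) if np.isfinite(lo) else 0.0
        p_hi = N.cdf(hi) if np.isfinite(hi) else 1.0
        mids.append(N.inv_cdf((p_lo + p_hi) / 2))
    s = np.sqrt(1 - rho ** 2)
    return np.array([[N.cdf((hi - rho * x) / s) - N.cdf((lo - rho * x) / s)
                      for lo, hi in zip(edges[:-1], edges[1:])]
                     for x in mids])

M = transition_matrix(band_edges=[-1.5, 0.0, 1.5], rho=0.9)
# rows sum to one, and mass concentrates near the diagonal for high rho
```

By construction every row is a valid probability distribution and the implied forward PD patterns are monotone in the current rating, which is exactly the well-behavedness the abstract claims for the structural approach.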

arXiv

Local SDG action is imperative to reach the 2030 Agenda, but different strategies for progressing on one SDG locally may cause different 'spillovers' on the same and other SDGs beyond local and national borders. We call for research efforts to empower local authorities to 'account globally' when acting locally.

arXiv

Research shows that women volunteer significantly more often for tasks that people prefer others to complete. By their very nature, such tasks carry little monetary incentive. We use a modified version of the volunteer's dilemma game to examine whether non-monetary interventions, particularly social recognition, can change the gender norms associated with such tasks. We design three treatments, in which a) a volunteer receives positive social recognition, b) a non-volunteer receives negative social recognition, and c) a volunteer receives positive, and a non-volunteer negative, social recognition. Our results indicate that competition for social recognition increases the overall likelihood that someone in a group volunteers. Positive social recognition closes the gender gap observed in the baseline treatment, as does the combination of positive and negative social recognition. Consistent with the prior literature on gender differences in competition, our results suggest that public recognition of volunteering can change the default gender norms in organizations and increase efficiency at the same time.
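For background, the classical symmetric volunteer's dilemma has a mixed equilibrium in closed form; this is standard game theory, not the paper's modified design:

```python
# Background from standard game theory (not the paper's modified game):
# in the symmetric volunteer's dilemma with n players, volunteering
# cost c, and benefit v if anyone volunteers, the mixed-strategy
# equilibrium makes each player indifferent: (1 - p)^(n-1) = c / v.
def volunteer_equilibrium(n, c, v):
    p = 1 - (c / v) ** (1 / (n - 1))      # individual volunteering probability
    p_any = 1 - (c / v) ** (n / (n - 1))  # probability at least one volunteers
    return p, p_any

p2, any2 = volunteer_equilibrium(2, c=0.2, v=1.0)
p8, any8 = volunteer_equilibrium(8, c=0.2, v=1.0)
# larger groups free-ride more: both p and p_any fall with group size
```

The free-riding built into this equilibrium is what makes the treatment result notable: social recognition raises the group-level probability that someone volunteers above this baseline.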

arXiv

In multi-criteria decision analysis workshops, participants often appraise the options individually before discussing the scoring as a group. The individual appraisals lead to score ranges within which the group then seeks the necessary agreement to identify their preferred option. Preference programming enables some options to be identified as dominated even before the group agrees on a precise scoring for them. Workshop participants usually face time pressure to make a decision. Decision support can be provided by flagging options for which further agreement on their scores seems particularly valuable. By valuable, we mean the opportunity to identify other options as dominated (using preference programming) without having their precise scores agreed beforehand. The present paper quantifies this Value of Agreement and extends the concept to portfolio decision analysis and criterion weights. The new concept is validated through a case study in recruitment.
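The dominance screening that preference programming enables can be sketched with score intervals; this is a minimal illustration, not the paper's full model:

```python
# Minimal sketch of interval dominance as used in preference
# programming (illustrative, not the paper's full model): each option
# carries a [low, high] score range from the individual appraisals,
# and option A dominates option B when A's worst score beats B's best.
def dominates(a, b):
    return a[0] > b[1]

def non_dominated(options):
    return {name for name, rng in options.items()
            if not any(dominates(other, rng)
                       for o, other in options.items() if o != name)}

options = {"A": (70, 90), "B": (40, 60), "C": (55, 80)}
surviving = non_dominated(options)  # B is dominated by A (70 > 60)
```

The Value of Agreement idea then asks which remaining intervals are most worth narrowing: tightening C's range, for instance, could let A dominate it without the group ever agreeing on precise scores.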

arXiv

Vanna-Volga is a popular method for the interpolation/extrapolation of volatility smiles. The technique is widely used in the FX markets context, due to its ability to consistently construct the entire Lognormal smile using only three Lognormal market quotes. However, the derivation of the Vanna-Volga method itself is free of distributional assumptions. With this in mind, it is surprising that there have been no attempts to apply the method to Normal volatilities (the current standard for interest rate markets). We show how the method can be modified to build Normal volatility smiles. As it turns out, only minor modifications are required compared to the Lognormal case. Moreover, as the inversion of volatilities from option prices is easier in the Normal case, the smile construction can occur at machine-precision level using analytical formulae, making Taylor-series approximations unnecessary. Apart from being based on practical and intuitive hedging arguments, the Vanna-Volga method has further important advantages. In comparison to the Normal SABR model, the Vanna-Volga can easily fit both classical convex and atypical concave smiles (frowns). Concave smile patterns are sometimes observed around ATM strikes in the interest rate markets, particularly in situations of anticipated jumps (with an unclear outcome) in interest rates. Besides, concavity is often observed towards the lower/left end of Normal volatility smiles of interest rates. At least in these situations, the Vanna-Volga can be expected to interpolate/extrapolate better than SABR.
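For reference, Normal volatility quotes refer to the Bachelier pricing formula, which any such smile construction builds on; this is the standard textbook formula, independent of the Vanna-Volga derivation itself:

```python
from math import sqrt, pi
from statistics import NormalDist

N = NormalDist()

# Standard Bachelier (Normal-model) call price, the quoting convention
# behind Normal volatility smiles; this is textbook material, not the
# Vanna-Volga construction itself.
def bachelier_call(forward, strike, sigma_n, T, df=1.0):
    if sigma_n <= 0 or T <= 0:
        return df * max(forward - strike, 0.0)
    d = (forward - strike) / (sigma_n * sqrt(T))
    return df * sigma_n * sqrt(T) * (d * N.cdf(d) + N.pdf(d))

# At the money the formula collapses to df * sigma_n * sqrt(T / (2 * pi)),
# which is why Normal volatilities invert from prices so easily.
atm = bachelier_call(forward=0.02, strike=0.02, sigma_n=0.006, T=1.0)
```

The simple ATM price-volatility relationship (and the well-behaved vega away from the money) is what allows the Normal smile construction to run at machine precision without Taylor-series approximations.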

arXiv

We use the Newcomb-Benford law to test whether countries have manipulated reported data during the COVID-19 pandemic. We find that democratic countries, countries with higher gross domestic product (GDP) per capita, higher healthcare expenditures, and better universal healthcare coverage are less likely to deviate from the Newcomb-Benford law. The relationship holds for the cumulative numbers of reported deaths and total cases but is more pronounced for the death toll. The findings are robust in second-digit tests, in a sub-sample of countries with regional data, and in relation to the previous swine flu (H1N1) 2009-2010 pandemic. The paper further highlights the importance of independent surveillance data verification projects.
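A minimal first-digit Benford check might look like this; it is a generic illustration, not the paper's exact test statistic:

```python
import math
from collections import Counter

# Simple sketch of a first-digit Newcomb-Benford check (a generic
# illustration, not the paper's exact test): compare observed
# leading-digit frequencies against log10(1 + 1/d) with a chi-square
# style statistic. Assumes positive integer inputs.
def benford_chi2(values):
    digits = [int(str(v)[0]) for v in values]
    n = len(digits)
    counts = Counter(digits)
    return sum((counts.get(d, 0) - n * math.log10(1 + 1 / d)) ** 2
               / (n * math.log10(1 + 1 / d))
               for d in range(1, 10))

# Powers of 2 are a classic Benford-conforming sequence, so the
# statistic stays small; a heavily distorted series would inflate it.
stat = benford_chi2([2 ** k for k in range(1, 500)])
```

Exponentially growing series, such as cumulative epidemic counts, spread their leading digits across orders of magnitude in just this way, which is why Benford-type tests are a natural screen for manipulated case and death totals.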