Robust Risk-Aware Reinforcement Learning
Author: Sebastian Jaimungal, Silvana M. Pesenti, Ye Sheng Wang, Hariom Tatsat
Year: 2021
Subject: FOS: Computer and information sciences; FOS: Economics and business; Machine Learning (cs.LG); Quantitative Finance - Computational Finance (q-fin.CP); Quantitative Finance - Portfolio Management (q-fin.PM); Quantitative Finance - Statistical Finance (q-fin.ST); Quantitative Finance - Risk Management (q-fin.RM); MSC: 91G70, 91-10, 91-08, 90C17, 93E35
Source: SSRN Electronic Journal
ISSN: 1556-5068
DOI: 10.2139/ssrn.3910498
Description: We present a reinforcement learning (RL) approach for robust optimisation of risk-aware performance criteria. To allow agents to express a wide variety of risk-reward profiles, we assess the value of a policy using rank dependent expected utility (RDEU). RDEU allows the agent to seek gains while simultaneously protecting itself against downside risk. To robustify optimal policies against model uncertainty, we assess a policy not by its distribution, but rather by the worst possible distribution that lies within a Wasserstein ball around it. Thus, our problem formulation may be viewed as an actor/agent choosing a policy (the outer problem), and an adversary then acting to worsen the performance of that strategy (the inner problem). We develop explicit policy gradient formulae for the inner and outer problems, and show their efficacy on three prototypical financial problems: robust portfolio allocation, optimising a benchmark, and statistical arbitrage. (12 pages, 5 figures)
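The abstract evaluates a policy by the rank dependent expected utility of its terminal-outcome distribution: utilities of outcomes are weighted by probability increments of a distortion function applied to the (empirical) distribution. The following is a minimal sketch of that empirical evaluation, not the paper's implementation; the function names `rdeu`, `utility`, and `distortion` are illustrative, and the exact convention (distorting the CDF versus the survival function) should follow the paper's own definition.

```python
import numpy as np

def rdeu(samples, utility, distortion):
    """Empirical rank dependent expected utility of a sample of outcomes.

    Sorts the outcomes, applies the distortion g to the empirical CDF
    levels k/n, and weights each sorted utility by the resulting
    probability increment g((k+1)/n) - g(k/n).
    """
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    levels = distortion(np.arange(n + 1) / n)  # distorted CDF levels g(k/n)
    weights = np.diff(levels)                  # increments sum to g(1) - g(0) = 1
    return float(np.sum(weights * utility(x)))

# With the identity utility and identity distortion, RDEU reduces to
# the sample mean; a convex distortion such as g(p) = p**2 shifts
# weight toward the best-ranked outcomes instead.
```

A concave utility together with a distortion that overweights low ranks captures the "seek gains while protecting against downside risk" trade-off the abstract describes.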
Database: OpenAIRE
External link: