G-Learner and GIRL: Goal Based Wealth Management with Reinforcement Learning
Author: Matthew Dixon, Igor Halperin
Language: English
Year of publication: 2020
Subject: Consumption (economics); Reinforcement learning; Portfolio; Artificial intelligence; Machine Learning (cs.LG); Machine Learning (stat.ML); Computational Finance (q-fin.CP); Portfolio Management (q-fin.PM); Linear-quadratic regulator; Financial goal; Probabilistic logic; Test data generation; Process (engineering); Computer science; FOS: Computer and information sciences; FOS: Economics and business
Description: We present a reinforcement learning approach to goal-based wealth management problems such as the optimization of retirement plans or target date funds. In such problems, an investor seeks to achieve a financial goal by making periodic investments into the portfolio while employed and periodically drawing from the account in retirement, while also being able to rebalance the portfolio by selling and buying different assets (e.g. stocks). Instead of relying on a utility of consumption, we present G-Learner: a reinforcement learning algorithm that operates with explicitly defined one-step rewards, does not assume a data generation process, and is suitable for noisy data. Our approach is based on G-learning (Fox et al., 2015), a probabilistic extension of the Q-learning method of reinforcement learning. In this paper, we demonstrate how G-learning, when applied to a quadratic reward and a Gaussian reference policy, gives an entropy-regulated Linear Quadratic Regulator (LQR). This critical insight provides a novel and computationally tractable tool for wealth management tasks that scales to high-dimensional portfolios. In addition to the solution of the direct problem of G-learning, we also present a new algorithm, GIRL, that extends our goal-based G-learning approach to the setting of Inverse Reinforcement Learning (IRL), where rewards collected by the agent are not observed and must instead be inferred. We demonstrate that GIRL can successfully learn the reward parameters of a G-Learner agent and thus imitate its behavior. Finally, we discuss potential applications of the G-Learner and GIRL algorithms for wealth management and robo-advising. A sketch of the G-learning recursion behind this construction appears after this record.
Database: OpenAIRE
External link:
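The abstract's central claim, that G-learning with a quadratic reward and a Gaussian reference policy reduces to an entropy-regulated LQR, rests on the soft Bellman recursion of G-learning (Fox et al., 2015). Below is a minimal sketch of that recursion; the notation (G, F, β, π₀) is illustrative and not necessarily the paper's exact symbols:

```latex
% G-learning soft Bellman recursion (after Fox et al., 2015).
% G is the state-action free energy, F the state free energy,
% beta an inverse-temperature parameter, and pi_0 a reference (prior) policy.
\begin{align}
  G^{\pi}(s_t, a_t) &= r(s_t, a_t)
      + \gamma\,\mathbb{E}_{s_{t+1} \mid s_t, a_t}\big[ F^{\pi}(s_{t+1}) \big] \\
  F^{\pi}(s_t) &= \frac{1}{\beta}
      \log \int \pi_0(a \mid s_t)\, e^{\beta\, G^{\pi}(s_t, a)}\, \mathrm{d}a \\
  \pi(a \mid s_t) &= \pi_0(a \mid s_t)\,
      e^{\beta \left( G^{\pi}(s_t, a) - F^{\pi}(s_t) \right)}
\end{align}
```

If the reward r(s, a) is quadratic in the state and action and π₀ is Gaussian, then G remains quadratic and the optimal policy π remains Gaussian at every step, so the fixed point can be computed by Riccati-style recursions rather than generic value iteration; this is what makes the approach tractable for high-dimensional portfolios.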