Adaptive Discount Factor for Deep Reinforcement Learning in Continuing Tasks with Uncertainty

Authors: MyeongSeop Kim, Jung-Su Kim, Myoung-Su Choi, Jae-Han Park
Language: English
Year of publication: 2022
Subject:
Source: Sensors, Vol 22, Iss 19, p 7266 (2022)
Document type: article
ISSN: 1424-8220
DOI: 10.3390/s22197266
Description: Reinforcement learning (RL) trains an agent by maximizing the sum of discounted rewards. Since the discount factor has a critical effect on the learning performance of the RL agent, it is important to choose it properly. When uncertainties are involved in training, the learning performance achievable with a constant discount factor can be limited. To obtain acceptable learning performance consistently, this paper proposes an adaptive rule for the discount factor based on the advantage function. Additionally, it presents how to use the advantage function in both on-policy and off-policy algorithms. To demonstrate the performance of the proposed adaptive rule, it is applied to PPO (Proximal Policy Optimization) for Tetris to validate the on-policy case, and to SAC (Soft Actor-Critic) for the motion planning of a robot manipulator to validate the off-policy case. In both cases, the proposed method results in performance better than or similar to that obtained with the best constant discount factors found by exhaustive search. Hence, the proposed adaptive rule automatically finds a discount factor that leads to comparable training performance and can be applied to representative deep reinforcement learning problems.
Database: Directory of Open Access Journals
Full text is not displayed to unauthenticated users
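The abstract describes adapting the discount factor from the advantage function but does not give the paper's actual update rule. As a minimal sketch only, the snippet below illustrates one plausible advantage-driven scheme: nudge the discount factor in proportion to the advantage estimate and clamp it to a valid range. The function name, step size, and bounds are all assumptions for illustration, not the authors' method.

```python
def adapt_discount(gamma, advantage, step=0.001, gamma_min=0.9, gamma_max=0.999):
    """Hypothetical advantage-driven discount-factor update (not the paper's rule).

    A positive advantage (the taken action beat the value baseline) pushes the
    agent to weigh future rewards more heavily; a negative advantage does the
    opposite. The result is clamped so gamma stays a valid discount factor.
    """
    gamma = gamma + step * advantage
    return max(gamma_min, min(gamma_max, gamma))


# Example: a small positive advantage slightly raises gamma,
# while a large advantage is capped at the upper bound.
g1 = adapt_discount(0.99, 1.0)    # -> 0.991
g2 = adapt_discount(0.99, 100.0)  # -> clamped to 0.999
```

In an on-policy setting such as PPO, the advantage estimates are already computed per batch (e.g. via GAE), so a rule like this could be evaluated once per update; in an off-policy setting such as SAC, an advantage-like quantity would have to be derived from the critic, which is presumably what the paper's on-policy/off-policy treatment addresses.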