Bayesian Deep Reinforcement Learning Algorithm for Solving Deep Exploration Problems

Author: YANG Min, WANG Jie
Language: Chinese
Year: 2020
Source: Jisuanji kexue yu tansuo, Vol 14, Iss 2, Pp 307-316 (2020)
Document type: article
ISSN: 1673-9418
DOI: 10.3778/j.issn.1673-9418.1901020
Description: In the field of reinforcement learning, balancing exploration and exploitation is a hard problem. Reinforcement learning methods proposed in recent years mainly focus on combining deep learning techniques to improve the generalization ability of the algorithm, but ignore the exploration-exploitation dilemma. Traditional reinforcement learning methods can solve the exploration problem effectively, but under a restrictive assumption: the state space of the Markov decision process must be discrete and finite. In this paper, a Bayesian method is proposed to improve the exploration efficiency of deep reinforcement learning algorithms. The main contribution is to extend the method for computing the posterior distribution of parameters in Bayesian linear regression to nonlinear models such as artificial neural networks. By combining Bootstrapped DQN (deep Q-network) with the computational method proposed in this paper, Bayesian Bootstrapped DQN (BBDQN) is obtained. Finally, experimental results in two environments show that BBDQN is more efficient than DQN and Bootstrapped DQN in the face of deep exploration problems.
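The abstract's core ingredient, the closed-form posterior over the weights in Bayesian linear regression, can be sketched as follows. This is a minimal illustration of the standard Gaussian posterior (precision and mean from features and targets) that the paper extends to neural networks; the function name, priors, and the synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def blr_posterior(phi, y, noise_var=1.0, prior_var=1.0):
    """Gaussian posterior over linear weights w, with y ~ N(phi @ w, noise_var)
    and prior w ~ N(0, prior_var * I). phi has shape (n, d), y has shape (n,)."""
    d = phi.shape[1]
    # Posterior precision combines the data term and the isotropic prior.
    precision = phi.T @ phi / noise_var + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ phi.T @ y / noise_var
    return mean, cov

# Illustrative use: recover known weights from noisy observations, then draw
# a posterior sample (the kind of sample Thompson-style exploration would act on).
rng = np.random.default_rng(0)
phi = rng.normal(size=(200, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = phi @ w_true + 0.1 * rng.normal(size=200)
mean, cov = blr_posterior(phi, y, noise_var=0.01)
w_sample = rng.multivariate_normal(mean, cov)
```

In deep-exploration settings, sampling `w_sample` from the posterior (rather than acting on the point estimate `mean`) is what lets an agent try plausible-but-uncertain value estimates; the paper's contribution is making such posterior computation available when the features come from a neural network.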
Database: Directory of Open Access Journals