Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning

Authors: Omura, Motoki; Osa, Takayuki; Mukuta, Yusuke; Harada, Tatsuya
Publication year: 2024
Subject:
Document type: Working Paper
DOI: 10.1609/aaai.v38i13.29362
Description: In deep reinforcement learning, estimating the value function to evaluate the quality of states and actions is essential. The value function is often trained using the least squares method, which implicitly assumes a Gaussian error distribution. However, a recent study suggested that the error distribution for training the value function is often skewed because of the properties of the Bellman operator, violating the implicit assumption of a normal error distribution in the least squares method. To address this, we propose a method called Symmetric Q-learning, in which synthetic noise generated from a zero-mean distribution is added to the target values to produce a Gaussian error distribution. We evaluated the proposed method on continuous control benchmark tasks in MuJoCo. It improved the sample efficiency of a state-of-the-art reinforcement learning method by reducing the skewness of the error distribution.
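
For concreteness, below is a minimal PyTorch-style sketch of the idea as stated in the abstract: zero-mean synthetic noise is added to the TD targets before the usual least-squares (MSE) update, so that the resulting error distribution becomes approximately Gaussian. All names here (QNetwork, symmetric_q_update, noise_dist, gamma) are illustrative assumptions for this sketch, not the authors' implementation; in particular, how the zero-mean noise distribution is chosen or learned is specified in the paper, not here.

```python
# Minimal sketch (assumed names, not the paper's code): add zero-mean
# synthetic noise to the TD target, then do an ordinary MSE update.
import torch
import torch.nn as nn

GAMMA = 0.99  # discount factor (illustrative value)

class QNetwork(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def symmetric_q_update(q, q_target, policy, batch, optimizer, noise_dist):
    """One critic update with noise-augmented targets.

    noise_dist is assumed to be a zero-mean torch.distributions object
    whose shape is chosen so the Bellman error becomes roughly Gaussian.
    """
    obs, act, rew, next_obs, done = batch
    with torch.no_grad():
        next_act = policy(next_obs)
        target = rew + GAMMA * (1.0 - done) * q_target(next_obs, next_act)
        # Key step from the abstract: perturb targets with zero-mean noise.
        target = target + noise_dist.sample(target.shape)
    loss = nn.functional.mse_loss(q(obs, act), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```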
Comment: Accepted at AAAI 2024: The 38th Annual AAAI Conference on Artificial Intelligence (Main Tech Track)
Database: arXiv