Efficient Preference-based Reinforcement Learning via Aligned Experience Estimation

Authors: Bai, Fengshuo; Zhao, Rui; Zhang, Hongming; Cui, Sijia; Wen, Ying; Yang, Yaodong; Xu, Bo; Han, Lei
Publication year: 2024
Subject:
Document type: Working Paper
Description: Preference-based reinforcement learning (PbRL) has shown impressive capabilities in training agents without reward engineering. However, a notable limitation of PbRL is its dependency on substantial human feedback. This dependency stems from the learning loop, which couples accurate reward learning with value/policy learning and therefore requires a large number of samples. To accelerate the learning loop, we propose SEER, an efficient PbRL method that integrates label smoothing and policy regularization techniques. Label smoothing reduces overfitting of the reward model by smoothing hard human preference labels. Additionally, we bootstrap a conservative estimate $\widehat{Q}$ using well-supported state-action pairs from the current replay memory to mitigate overestimation bias, and we use it to regularize policy learning. Our experimental results across a variety of complex tasks, in both online and offline settings, demonstrate that our approach improves feedback efficiency, outperforming state-of-the-art methods by a large margin. Ablation studies further reveal that SEER learns a more accurate Q-function than prior work.
Database: arXiv
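
The abstract names two mechanisms without spelling them out: smoothing the hard human preference labels in the reward-learning loss, and clipping bootstrapped value targets with a conservative in-sample estimate $\widehat{Q}$. The following is a minimal PyTorch sketch of both ideas, assuming the standard Bradley-Terry preference model common in PbRL. The function names, the eps default, and the elementwise-minimum clipping rule are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def smoothed_preference_loss(r_hat_1, r_hat_2, prefs, eps=0.1):
    """Bradley-Terry cross-entropy with label smoothing (sketch).

    r_hat_1, r_hat_2: (batch, segment_len) predicted per-step rewards
        for the two trajectory segments in each human comparison.
    prefs: (batch,) hard labels, 1.0 if segment 1 was preferred, else 0.0.
    eps: smoothing strength; a hard label y becomes (1 - eps) * y + eps / 2.
    """
    # Segment returns act as logits under the Bradley-Terry model.
    logits = torch.stack([r_hat_1.sum(dim=1), r_hat_2.sum(dim=1)], dim=1)
    p1 = prefs * (1.0 - eps) + eps / 2.0      # smoothed P(segment 1 preferred)
    targets = torch.stack([p1, 1.0 - p1], dim=1)
    # Soft cross-entropy between smoothed targets and predicted preferences.
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

def conservative_bellman_target(rewards, q_next, q_hat_next, gamma=0.99):
    """Bootstrapped target clipped by a conservative estimate (sketch).

    q_next: ordinary target-network value at the next state-action.
    q_hat_next: conservative estimate Q_hat fit on well-supported
        (in-replay) state-action pairs; how Q_hat is constructed is the
        paper's contribution, and the elementwise minimum used here is
        only one plausible way to apply it.
    """
    return rewards + gamma * torch.minimum(q_next, q_hat_next)

In a training loop, the smoothed loss would stand in for the usual hard-label cross-entropy on human comparisons, and the clipped target would replace the standard Bellman target in the critic update, biasing value estimates downward on poorly supported actions to counter overestimation.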