Diffusion Policies for Out-of-Distribution Generalization in Offline Reinforcement Learning
Authors: Ada, Suzan Ece; Oztop, Erhan; Ugur, Emre
Publication Year: 2023
Source: IEEE Robotics and Automation Letters 9(4), 2024, pp. 3116-3123
Document Type: Working Paper
DOI: 10.1109/LRA.2024.3363530
Description: Offline Reinforcement Learning (RL) methods leverage previous experiences to learn better policies than the behavior policy used for data collection. In contrast to behavior cloning, which assumes the data is collected from expert demonstrations, offline RL can work with non-expert data and multimodal behavior policies. However, offline RL algorithms face challenges in handling distribution shifts and effectively representing policies due to the lack of online interaction during training. Prior work on offline RL uses conditional diffusion models to represent multimodal behavior in the dataset. Nevertheless, these methods are not tailored to improving generalization to out-of-distribution (OOD) states. We introduce a novel method named State Reconstruction for Diffusion Policies (SRDP), which incorporates state-reconstruction feature learning into the recent class of diffusion policies to address the OOD generalization problem. The state reconstruction loss promotes learning generalizable state representations, alleviating the distribution shift incurred by OOD states. We design a novel 2D Multimodal Contextual Bandit environment to illustrate the OOD generalization and faster convergence of SRDP compared to prior algorithms. In addition, we assess the performance of our model on D4RL continuous control benchmarks, namely the navigation of an 8-DoF ant and forward locomotion of half-cheetah, hopper, and walker2d, achieving state-of-the-art results.
Comment: 8 pages, 7 figures
Database: arXiv
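The core idea in the abstract above, a conditional diffusion policy whose state encoder is additionally trained with a state-reconstruction loss, can be sketched as follows. This is a minimal PyTorch illustration under assumed details (network sizes, a linear beta schedule, the weight name `recon_weight`, the class name `SRDPSketch`); it is not the authors' implementation.

```python
# Minimal sketch of a diffusion policy with an auxiliary state-reconstruction
# loss, as described in the abstract. All names and hyperparameters here are
# illustrative assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRDPSketch(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=256, n_timesteps=100):
        super().__init__()
        self.n_timesteps = n_timesteps
        # Shared state encoder: its features condition the denoiser and are
        # also decoded back to the input state (the auxiliary reconstruction).
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        self.decoder = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))
        # Denoiser eps_theta(a_t, t, s): predicts the noise added to the action.
        self.denoiser = nn.Sequential(
            nn.Linear(action_dim + 1 + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim))
        # Standard DDPM forward-process constants (linear betas for brevity).
        betas = torch.linspace(1e-4, 2e-2, n_timesteps)
        alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
        self.register_buffer("sqrt_ac", alphas_cumprod.sqrt())
        self.register_buffer("sqrt_1m_ac", (1.0 - alphas_cumprod).sqrt())

    def loss(self, state, action, recon_weight=0.5):
        b = state.shape[0]
        t = torch.randint(0, self.n_timesteps, (b,), device=state.device)
        noise = torch.randn_like(action)
        # Forward diffusion: a_t = sqrt(ac_t) * a_0 + sqrt(1 - ac_t) * eps
        a_t = self.sqrt_ac[t, None] * action + self.sqrt_1m_ac[t, None] * noise
        z = self.encoder(state)
        t_in = t.float()[:, None] / self.n_timesteps
        eps_pred = self.denoiser(torch.cat([a_t, t_in, z], dim=-1))
        diffusion_loss = F.mse_loss(eps_pred, noise)       # behavior modeling
        recon_loss = F.mse_loss(self.decoder(z), state)    # OOD regularizer
        return diffusion_loss + recon_weight * recon_loss
```

A training step would simply call `model.loss(states, actions).backward()` on batches from the offline dataset. The intent of the reconstruction term, per the abstract, is to force the encoder to retain enough state information to produce representations that generalize to out-of-distribution states.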