Improving the Robustness of Reinforcement Learning Policies With $\mathcal{L}_{1}$ Adaptive Control
Author: | Yikun Cheng, Pan Zhao, Fanxin Wang, Daniel J. Block, Naira Hovakimyan |
---|---|
Year of publication: | 2022 |
Subject: | FOS: Computer and information sciences; Human-Computer Interaction; Computer Science - Robotics; Control and Optimization; Artificial Intelligence; Control and Systems Engineering; Mechanical Engineering; Biomedical Engineering; Computer Vision and Pattern Recognition; Robotics (cs.RO); Computer Science Applications |
Source: | IEEE Robotics and Automation Letters. 7:6574-6581 |
ISSN: | 2377-3774 |
DOI: | 10.1109/lra.2022.3169309 |
Description: | A reinforcement learning (RL) control policy can fail in a new or perturbed environment that differs from the training environment, due to the presence of dynamic variations. For controlling systems with continuous state and action spaces, we propose an add-on approach that robustifies a pre-trained RL policy by augmenting it with an $\mathcal{L}_{1}$ adaptive controller ($\mathcal{L}_{1}$AC). Leveraging the capability of an $\mathcal{L}_{1}$AC for fast estimation and active compensation of dynamic variations, the proposed approach improves the robustness of an RL policy trained, either in a simulator or in the real world, without consideration of a broad class of dynamic variations (a minimal illustrative sketch of such an augmentation follows this record). Numerical and real-world experiments empirically demonstrate the efficacy of the proposed approach in robustifying RL policies trained using both model-free and model-based methods. The extended work for the journal version is included; see https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9761728. arXiv admin note: substantial text overlap with arXiv:2106.02249 |
Database: | OpenAIRE |
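To make the add-on architecture described in the abstract concrete, the following is a minimal sketch of wrapping a pre-trained RL policy with a standard $\mathcal{L}_{1}$ adaptive loop (state predictor, piecewise-constant adaptation law, low-pass filter) for a scalar system with matched uncertainty, $\dot{x} = f(x) + u + \sigma(t, x)$. The gains, the nominal model `f_nominal`, and the `rl_policy` callable are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class L1Augmentation:
    """Sketch of an L1 adaptive augmentation for a pre-trained RL policy.

    Assumes the scalar dynamics  x_dot = f(x) + u + sigma(t, x),  where
    sigma lumps the (matched) dynamic variations. All gains below are
    illustrative choices, not values from the paper.
    """

    def __init__(self, rl_policy, f_nominal, dt=0.01, a_s=-10.0, omega=20.0):
        self.rl_policy = rl_policy  # pre-trained policy: x -> u_RL
        self.f_nominal = f_nominal  # nominal dynamics used by the predictor
        self.dt = dt                # sampling period of the adaptation loop
        self.a_s = a_s              # Hurwitz predictor gain (a_s < 0)
        self.omega = omega          # bandwidth of the low-pass filter C(s)
        self.x_hat = 0.0            # predictor state (ideally set to the first measurement)
        self.sigma_hat = 0.0        # piecewise-constant disturbance estimate
        self.u_l1 = 0.0             # filtered compensation signal

    def action(self, x):
        """Return the total input u = u_RL + u_L1 for the measured state x."""
        u_rl = self.rl_policy(x)

        # Piecewise-constant adaptation law: update sigma_hat once per
        # sampling period from the prediction error x_tilde = x_hat - x.
        x_tilde = self.x_hat - x
        phi = (np.exp(self.a_s * self.dt) - 1.0) / self.a_s
        self.sigma_hat = -np.exp(self.a_s * self.dt) * x_tilde / phi

        # Low-pass filter the estimate so only its low-frequency content is
        # cancelled (forward-Euler discretization of C(s) = omega / (s + omega)).
        self.u_l1 += self.dt * self.omega * (-self.sigma_hat - self.u_l1)

        u = u_rl + self.u_l1

        # State predictor: nominal model + total input + current estimate
        # + error-feedback term a_s * x_tilde.
        self.x_hat += self.dt * (self.f_nominal(x) + u
                                 + self.sigma_hat + self.a_s * x_tilde)
        return u
```

In this sketch the filter bandwidth `omega` is the core tuning knob of the $\mathcal{L}_{1}$AC: it trades off how aggressively the low-frequency content of $\sigma$ is compensated against sensitivity to high-frequency measurement noise, while the fast estimation itself is decoupled in the adaptation law.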