On-Policy Robot Imitation Learning from a Converging Supervisor
Author: | Balakrishna, Ashwin; Thananjeyan, Brijen; Lee, Jonathan; Li, Felix; Zahed, Arsh; Gonzalez, Joseph E.; Goldberg, Ken |
---|---|
Year of publication: | 2019 |
Subject: | |
Source: | 3rd Conference on Robot Learning (CoRL 2019) |
Document type: | Working Paper |
Description: | Existing on-policy imitation learning algorithms, such as DAgger, assume access to a fixed supervisor. However, there are many settings where the supervisor may evolve during policy learning, such as a human performing a novel task or an improving algorithmic controller. We formalize imitation learning from a "converging supervisor" and provide sublinear static and dynamic regret guarantees against the best policy in hindsight with labels from the converged supervisor, even when labels during learning come only from intermediate supervisors. We then show that this framework is closely connected to a class of reinforcement learning (RL) algorithms known as dual policy iteration (DPI), which alternate between training a reactive learner with imitation learning and training a model-based supervisor with data from the learner (a toy sketch of this loop appears after the record below). Experiments suggest that when this framework is applied with the state-of-the-art deep model-based RL algorithm PETS as an improving supervisor, it outperforms deep RL baselines on continuous control tasks and provides up to an 80-fold speedup in policy evaluation. Comment: Conference on Robot Learning (CoRL) 2019 Oral. First two authors contributed equally. |
Database: | arXiv |
External link: |
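To make the converging-supervisor loop concrete, here is a minimal, hypothetical Python sketch of the DAgger-style interaction the description outlines: the learner collects on-policy states, the *current* (intermediate) supervisor labels them, the learner is retrained on the aggregated dataset, and the supervisor improves between iterations, as in DPI. The toy linear system, the synthetically converging linear supervisor, and all names (`Supervisor`, `Learner`, `rollout`, `K_expert`) are illustrative assumptions, not the paper's implementation or the PETS algorithm.

```python
import numpy as np

# Toy sketch of on-policy imitation learning from a converging supervisor.
# A linear controller whose gain converges toward a fixed expert stands in
# for an improving model-based supervisor; the learner is least squares.

rng = np.random.default_rng(0)
STATE_DIM, ACT_DIM, HORIZON, N_ITERS = 4, 2, 25, 30

A = 0.9 * np.eye(STATE_DIM)                       # toy linear dynamics
B = 0.1 * rng.normal(size=(STATE_DIM, ACT_DIM))
K_expert = rng.normal(size=(ACT_DIM, STATE_DIM))  # converged supervisor gain

class Supervisor:
    """Improving supervisor: its gain K converges toward K_expert."""
    def __init__(self):
        self.K = np.zeros((ACT_DIM, STATE_DIM))
    def label(self, s):
        return self.K @ s
    def improve(self):
        # Each update halves the gap to the converged supervisor, standing
        # in for a model-based controller refit on the learner's data.
        self.K += 0.5 * (K_expert - self.K)

class Learner:
    """Reactive policy: a linear map fit by supervised regression."""
    def __init__(self):
        self.W = np.zeros((ACT_DIM, STATE_DIM))
    def act(self, s):
        return self.W @ s
    def fit(self, S, U):
        # Least-squares fit onto the aggregated labeled dataset.
        self.W = np.linalg.lstsq(S, U, rcond=None)[0].T

def rollout(policy):
    """Collect the states visited by running the learner on-policy."""
    s, states = rng.normal(size=STATE_DIM), []
    for _ in range(HORIZON):
        states.append(s)
        s = A @ s + B @ policy.act(s)
    return states

learner, supervisor = Learner(), Supervisor()
data_S, data_U = [], []
for _ in range(N_ITERS):
    states = rollout(learner)                       # 1. on-policy data
    data_S += states                                # 2. intermediate labels
    data_U += [supervisor.label(s) for s in states]
    learner.fit(np.asarray(data_S), np.asarray(data_U))  # 3. retrain learner
    supervisor.improve()                            # 4. DPI-style update

print("gap to converged supervisor:", np.linalg.norm(learner.W - K_expert))
```

Replacing the synthetic `Supervisor.improve` with a model-based RL update fit on the learner's rollouts would recover the DPI-style alternation the paper studies.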