Contextual Policy Transfer in Reinforcement Learning Domains via Deep Mixtures-of-Experts

Authors: Michael Gimelfarb, Scott Sanner, Chi-Guhn Lee
Year of publication: 2020
Subject:
Document type: Working Paper
Description: In reinforcement learning, agents that consider the context, or current state, when selecting source policies for transfer have been shown to outperform context-free approaches. However, none of the existing approaches transfer knowledge contextually from model-based learners to a model-free learner. This could be useful, for instance, when source policies are intentionally learned on diverse simulations with plentiful data but transferred to a real-world setting with limited data. In this paper, we assume knowledge of estimated source task dynamics and policies, and that source and target tasks share common sub-goals but differ in dynamics. We introduce a novel deep mixture-of-experts formulation for learning state-dependent beliefs over source task dynamics that match the target dynamics, using state trajectories collected from the target task. The mixture model is easy to interpret, demonstrates robustness to estimation errors in dynamics, and is compatible with most learning algorithms. We then show how this model can be incorporated into standard policy reuse frameworks, and demonstrate its effectiveness on benchmarks from OpenAI Gym.
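The core idea of learning beliefs over source task dynamics from target-task transitions can be sketched with a simple Bayes-like update: each source model scores an observed transition by its likelihood, and the belief over models is renormalized accordingly. This is a minimal illustrative sketch, not the paper's deep mixture-of-experts architecture; all names (`update_beliefs`, the Gaussian likelihood, the toy linear models) are assumptions for illustration.

```python
import numpy as np

def gaussian_log_likelihood(pred_mean, sigma, next_state):
    # Log-density of the observed next state under an isotropic Gaussian
    # centered at the source model's prediction (hypothetical noise model).
    d = next_state - pred_mean
    return -0.5 * np.sum(d * d) / (sigma ** 2)

def update_beliefs(beliefs, source_models, state, next_state, sigma=0.1):
    """One multiplicative belief update over K source dynamics models
    from a single observed target transition (state, next_state)."""
    log_post = np.log(beliefs + 1e-12)
    for k, model in enumerate(source_models):
        log_post[k] += gaussian_log_likelihood(model(state), sigma, next_state)
    log_post -= log_post.max()  # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Toy example: two linear "source dynamics"; the target transition
# matches the second model, so its belief should dominate.
models = [lambda s: 0.5 * s, lambda s: 2.0 * s]
beliefs = np.array([0.5, 0.5])
state = np.array([1.0])
beliefs = update_beliefs(beliefs, models, state, next_state=2.0 * state)
```

In the paper's formulation the beliefs are state-dependent (a gating network outputs them as a function of the current state); the scalar belief vector above stands in for that gate at a single state.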
Comment: updated experiment for the Lander domain (fixed a bug in the UCB baseline); minor editing, formatting, and typo fixes; new template; 15 pages, 6 figures
Database: arXiv