Author:
Jothimurugan, Kishor; Bansal, Suguman; Bastani, Osbert; Alur, Rajeev
Publication year:
2021
Subject:
Source:
35th Conference on Neural Information Processing Systems (NeurIPS 2021)
Document type:
Working Paper
Description:
We study the problem of learning control policies for complex tasks given by logical specifications. Recent approaches automatically generate a reward function from a given specification and use a suitable reinforcement learning algorithm to learn a policy that maximizes the expected reward. These approaches, however, scale poorly to complex tasks that require high-level planning. In this work, we develop a compositional learning approach, called DiRL, that interleaves high-level planning and reinforcement learning. First, DiRL encodes the specification as an abstract graph; intuitively, vertices and edges of the graph correspond to regions of the state space and simpler sub-tasks, respectively. Our approach then uses reinforcement learning to learn a neural network policy for each edge (sub-task) within a Dijkstra-style planning algorithm that computes a high-level plan in the graph. An evaluation of the proposed approach on a set of challenging control benchmarks with continuous state and action spaces demonstrates that it outperforms state-of-the-art baselines.
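As a reading aid, the Dijkstra-style planning step described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the abstract graph, the edge success probabilities, and the use of -log(probability) edge costs (under which a shortest path corresponds to a maximum-probability plan) are all assumptions made for the example.

import heapq
import math

# Hypothetical abstract graph: vertices name regions of the state space,
# edges are sub-tasks annotated with the estimated success probability of
# a learned edge policy. The numbers are illustrative placeholders.
EDGES = {
    "start":    [("corridor", 0.95), ("shortcut", 0.40)],
    "corridor": [("goal", 0.90)],
    "shortcut": [("goal", 0.85)],
    "goal":     [],
}

def dijkstra_max_success(source, target):
    # Dijkstra with edge cost -log(p): sums of -log(p) correspond to
    # products of probabilities, so a shortest path maximizes the chance
    # that every sub-task policy along the plan succeeds.
    dist = {source: 0.0}
    parent = {source: None}
    frontier = [(0.0, source)]
    done = set()
    while frontier:
        cost, u = heapq.heappop(frontier)
        if u in done:
            continue
        done.add(u)
        if u == target:
            break
        for v, p in EDGES[u]:
            new_cost = cost - math.log(p)
            if new_cost < dist.get(v, math.inf):
                dist[v] = new_cost
                parent[v] = u
                heapq.heappush(frontier, (new_cost, v))
    if target not in parent:
        return None, 0.0  # no plan reaches the target region
    path, node = [], target
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1], math.exp(-dist[target])

plan, prob = dijkstra_max_success("start", "goal")
print(plan, prob)  # ['start', 'corridor', 'goal'], success probability ~0.855

In DiRL the planner and the learner are interleaved, so success probabilities such as those above would be re-estimated as the edge policies improve rather than fixed up front.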
Database:
arXiv
External link: