Task-guided IRL in POMDPs that scales
Authors: Franck Djeumou, Christian Ellis, Murat Cubuktepe, Craig Lennon, Ufuk Topcu
Year of publication: 2023
Subjects: FOS: Computer and information sciences; FOS: Mathematics; Computer Science - Machine Learning (cs.LG); Computer Science - Artificial Intelligence (cs.AI); Computer Science - Formal Languages and Automata Theory (cs.FL); Mathematics - Optimization and Control (math.OC); Artificial Intelligence; Language and Linguistics
Source: Artificial Intelligence 317:103856
ISSN: 0004-3702
Description: In inverse reinforcement learning (IRL), a learning agent uses demonstrations from experts to infer a reward function encoding the underlying task. However, many existing IRL techniques make the often unrealistic assumption that the agent has access to full information about the environment. We remove this assumption by developing an algorithm for IRL in partially observable Markov decision processes (POMDPs). We address two limitations of existing IRL techniques. First, they require an excessive amount of data due to the information asymmetry between the expert and the learner. Second, most of these techniques require solving the computationally intractable forward problem -- computing an optimal policy given a reward function -- in POMDPs. The developed algorithm reduces the information asymmetry and increases data efficiency by incorporating task specifications expressed in temporal logic into IRL. Such specifications may be interpreted as side information available to the learner a priori, in addition to the demonstrations. Further, the algorithm avoids a common source of algorithmic complexity by using causal entropy, as opposed to entropy, as the measure of the likelihood of the demonstrations. Nevertheless, the resulting problem is nonconvex due to the so-called forward problem. We handle the intrinsic nonconvexity of the forward problem in a scalable manner through a sequential linear programming scheme that is guaranteed to converge to a locally optimal policy. In a series of examples, including experiments in a high-fidelity Unity simulator, we demonstrate that, even with a limited amount of data and POMDPs with tens of thousands of states, our algorithm learns reward functions and policies that satisfy the task while inducing behavior similar to the expert's by leveraging the provided side information. Comment: Final submission to the Artificial Intelligence journal (Elsevier). arXiv admin note: substantial text overlap with arXiv:2105.14073
Database: OpenAIRE
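The description mentions that the nonconvex forward problem is tackled with a sequential linear programming scheme that converges to a locally optimal policy. As a rough, hedged illustration of that general idea only (not the authors' actual algorithm), the sketch below repeatedly linearizes a nonconvex objective around the current iterate and solves a trust-region-restricted linear program with SciPy's linprog; the helper name slp_maximize, the trust-region shrinking rule, and the toy objective are assumptions made for this example.

```python
# Illustrative sketch only: a generic sequential linear programming (SLP) loop
# applied to a toy box-constrained nonconvex maximization. The names, the
# trust-region rule, and the toy objective are assumptions for illustration,
# not the paper's formulation, which optimizes POMDP policies against learned
# rewards and temporal-logic constraints.
import numpy as np
from scipy.optimize import linprog


def slp_maximize(grad_f, x0, bounds, trust_radius=0.5, shrink=0.5,
                 tol=1e-6, max_iter=50):
    """Maximize a smooth nonconvex objective over a box via repeated LPs."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        # Linearize the objective around x and restrict the step d to a
        # trust region intersected with the original box constraints.
        step_bounds = [(max(lo - xi, -trust_radius), min(hi - xi, trust_radius))
                       for xi, (lo, hi) in zip(x, bounds)]
        # linprog minimizes, so negate the gradient to maximize g @ d.
        res = linprog(-g, bounds=step_bounds, method="highs")
        d = res.x
        if np.linalg.norm(d) < tol:
            break
        x = x + d
        trust_radius *= shrink  # shrink the region so the iterates settle
    return x


# Toy usage: maximize f(x) = sin(x[0]) + x[0] * x[1] on [0, 2] x [0, 1].
grad = lambda x: np.array([np.cos(x[0]) + x[1], x[0]])
print(slp_maximize(grad, x0=[0.1, 0.1], bounds=[(0.0, 2.0), (0.0, 1.0)]))
```

In the paper's setting, the linearized subproblems would instead be defined over the policy variables of the POMDP, with the temporal-logic task specification entering as additional constraints; the sketch conveys only the linearize-solve-repeat structure.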