Learning for Multi-robot Cooperation in Partially Observable Stochastic Environments with Macro-actions
Author: | Jonathan P. How, Christopher Amato, Miao Liu, Kavinayan Sivakumar, Shayegan Omidshafiei |
---|---|
Contributors: | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics; Massachusetts Institute of Technology. Department of Mechanical Engineering; Massachusetts Institute of Technology. Laboratory for Information and Decision Systems; Liu, Miao; Sivakumar, Kavinayan P.; Omidshafiei, Shayegan; How, Jonathan P. |
Language: | English |
Year of publication: | 2017 |
Subject: | FOS: Computer and information sciences; 0209 industrial biotechnology; Computer science; Distributed computing; Sampling (statistics); Observable; 02 engineering and technology; Machine Learning (cs.LG); Computer Science - Learning; Computer Science - Robotics; 020901 industrial engineering & automation; Asynchronous communication; 0202 electrical engineering, electronic engineering, information engineering; Trajectory; Robot; 020201 artificial intelligence & image processing; Computer Science - Multiagent Systems; Markov decision process; Sensitivity (control systems); Macro; Robotics (cs.RO); Search and rescue; Multiagent Systems (cs.MA) |
Source: | IROS; arXiv |
Description: | This paper presents a data-driven approach for multi-robot coordination in partially observable domains based on Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) and macro-actions (MAs). Dec-POMDPs provide a general framework for cooperative sequential decision making under uncertainty, and MAs allow temporally extended and asynchronous action execution. To date, most methods assume the underlying Dec-POMDP model is known a priori or that a full simulator is available during planning time. Previous methods that aim to address these issues suffer from local optimality and sensitivity to initial conditions. Additionally, few hardware demonstrations exist that involve a large team of heterogeneous robots with long planning horizons. This work addresses these gaps by proposing an iterative sampling-based Expectation-Maximization algorithm (iSEM) to learn policies using only trajectory data containing observations, MAs, and rewards. Our experiments show the algorithm achieves better solution quality than state-of-the-art learning-based methods. We implement two variants of multi-robot Search and Rescue (SAR) domains (with and without obstacles) on hardware to demonstrate that the learned policies can effectively control a team of distributed robots to cooperate in a partially observable stochastic environment. Accepted to the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017). A hedged, illustrative sketch of the EM-style learning setting described here follows the record below. |
Database: | OpenAIRE |
External link: |
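
The description above outlines learning controller policies purely from trajectories of observations, macro-actions, and rewards via Expectation-Maximization. The sketch below is a minimal, hypothetical single-agent version of that idea: reward-weighted EM over a finite-state controller. It is not the paper's iSEM algorithm (which is decentralized, multi-agent, and uses iterative sampling to combat local optima); every name here (`forward_backward`, `em_step`, `pi`, `eta`) is an illustrative assumption.

```python
# Minimal, hypothetical reward-weighted EM for a single-agent finite-state
# controller (FSC), learned only from (macro-action, observation, reward)
# trajectories. Illustrative only -- NOT the paper's iSEM algorithm.
import numpy as np

def forward_backward(pi, eta, actions, obs):
    """Posterior over controller-node sequences for one trajectory.

    pi : (N, A) action-selection probabilities per controller node
    eta: (N, A, O, N) node-transition probabilities P(n' | n, a, o)
    actions, obs : aligned index sequences of length T (obs[t] is the
                   observation received after executing actions[t])
    Returns gamma (T, N) node marginals and xi (T-1, N, N) pairwise marginals.
    """
    T, N = len(actions), pi.shape[0]
    alpha = np.zeros((T, N))
    beta = np.ones((T, N))
    alpha[0] = pi[:, actions[0]] / N              # assume uniform initial node
    for t in range(1, T):
        trans = eta[:, actions[t - 1], obs[t - 1], :]        # (N, N)
        alpha[t] = (alpha[t - 1] @ trans) * pi[:, actions[t]]
        alpha[t] /= alpha[t].sum() + 1e-12        # per-step rescaling
    for t in range(T - 2, -1, -1):
        trans = eta[:, actions[t], obs[t], :]
        beta[t] = trans @ (pi[:, actions[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum() + 1e-12
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True) + 1e-12
    xi = np.zeros((max(T - 1, 0), N, N))
    for t in range(T - 1):
        trans = eta[:, actions[t], obs[t], :]
        xi[t] = (alpha[t][:, None] * trans
                 * (pi[:, actions[t + 1]] * beta[t + 1])[None, :])
        xi[t] /= xi[t].sum() + 1e-12
    return gamma, xi

def em_step(pi, eta, trajectories, discount=0.95):
    """One EM iteration: E-step node posteriors, return-weighted M-step.

    trajectories: list of (actions, obs, rewards) tuples. Assumes returns
    are non-negative (e.g., rewards shifted) so they can act as weights.
    """
    N, A, O = pi.shape[0], pi.shape[1], eta.shape[2]
    pi_counts = np.full((N, A), 1e-3)             # small smoothing prior
    eta_counts = np.full((N, A, O, N), 1e-3)
    for actions, obs, rewards in trajectories:
        ret = sum(r * discount ** t for t, r in enumerate(rewards))
        w = max(ret, 0.0)                         # trajectory weight
        gamma, xi = forward_backward(pi, eta, actions, obs)
        for t, a in enumerate(actions):
            pi_counts[:, a] += w * gamma[t]
        for t in range(len(actions) - 1):
            eta_counts[:, actions[t], obs[t], :] += w * xi[t]
    pi = pi_counts / pi_counts.sum(axis=1, keepdims=True)
    eta = eta_counts / eta_counts.sum(axis=3, keepdims=True)
    return pi, eta
```

In use, one would initialize `pi` and `eta` randomly (e.g., Dirichlet draws), iterate `em_step` until the weighted likelihood stabilizes, and read the learned FSC off `pi`/`eta`. Plain EM of this kind is exactly the sort of method the description says suffers from local optimality and sensitivity to initial conditions; per the abstract, iSEM counters that with iterative sampling across a team of decentralized controllers.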