Towards a Distributed Framework for Multi-Agent Reinforcement Learning Research
Author: | Sheng Li, Jaime Pena, Ross Allen, Yutai Zhou, Peter Morales, Shawn Manuel |
Year of publication: | 2020 |
Subject: | Distributed computing; Supercomputing; Reinforcement learning; Computation; Computer science |
Source: | HPEC |
DOI: | 10.1109/hpec43674.2020.9286212 |
Description: | Some of the most important publications in deep reinforcement learning over the last few years have been fueled by access to massive amounts of computation through large-scale distributed systems. The success of these approaches in achieving human-expert-level performance on several complex video-game environments has motivated further exploration of their limits as computation increases. In this paper, we present a distributed RL training framework designed for supercomputing infrastructures such as the MIT SuperCloud. We review a collection of challenging learning environments, such as Google Research Football, StarCraft II, and Multi-Agent Mujoco, which are at the frontier of reinforcement learning research. We provide results on these environments that illustrate the current state of the field on these problems. Finally, we quantify and discuss the computational requirements of RL research by enumerating all experiments performed on these environments. |
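The record does not detail the framework's architecture; purely as a hedged illustration, the sketch below shows the learner/worker pattern that distributed RL training frameworks of this kind commonly use, with rollout workers feeding experience to a central learner. All names and the simulated rollout here are hypothetical and are not taken from the paper.

```python
# Hypothetical learner/worker sketch of distributed RL data collection.
# Not the paper's actual framework: the rollout is simulated with random
# episode returns standing in for real environment interaction.
import multiprocessing as mp
import random

def rollout_worker(worker_id, queue, n_episodes):
    """Generate episodes and push (worker_id, return) results to the learner."""
    rng = random.Random(worker_id)  # seed per worker for reproducibility
    for _ in range(n_episodes):
        # Stand-in for an environment rollout: a random episode return.
        episode_return = sum(rng.random() for _ in range(10))
        queue.put((worker_id, episode_return))

def learner(n_workers=4, n_episodes=5):
    """Spawn rollout workers and aggregate their results centrally."""
    queue = mp.Queue()
    procs = [mp.Process(target=rollout_worker, args=(i, queue, n_episodes))
             for i in range(n_workers)]
    for p in procs:
        p.start()
    # Collect exactly one result per episode across all workers.
    results = [queue.get() for _ in range(n_workers * n_episodes)]
    for p in procs:
        p.join()
    return sum(r for _, r in results) / len(results)

if __name__ == "__main__":
    print(f"mean episode return: {learner():.3f}")
```

In a real deployment on a cluster such as the MIT SuperCloud, the workers would run on separate nodes and communicate over the network rather than a local queue, but the aggregation pattern is the same.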
Database: | OpenAIRE |
External link: |