Towards a Distributed Framework for Multi-Agent Reinforcement Learning Research

Author: Sheng Li, Jaime Pena, Ross Allen, Yutai Zhou, Peter Morales, Shawn Manuel
Year of publication: 2020
Subject:
Source: HPEC
DOI: 10.1109/hpec43674.2020.9286212
Description: Some of the most important publications in deep reinforcement learning over the last few years have been fueled by access to massive amounts of computation through large-scale distributed systems. The success of these approaches in achieving human-expert-level performance on several complex video-game environments has motivated further exploration into their limits as computation increases. In this paper, we present a distributed RL training framework designed for supercomputing infrastructures such as the MIT SuperCloud. We review a collection of challenging learning environments, such as Google Research Football, StarCraft II, and Multi-Agent MuJoCo, which are at the frontier of reinforcement learning research. We provide results on these environments that illustrate the current state of the field on these problems. Finally, we quantify and discuss the computational requirements of performing RL research by enumerating all experiments performed on these environments.
Database: OpenAIRE