SURREAL-System: Fully-Integrated Stack for Distributed Deep Reinforcement Learning
Author: Fan, Linxi; Zhu, Yuke; Zhu, Jiren; Liu, Zihua; Zeng, Orien; Gupta, Anchit; Creus-Costa, Joan; Savarese, Silvio; Fei-Fei, Li
Publication Year: 2019
Document Type: Working Paper
Description: We present an overview of SURREAL-System, a reproducible, flexible, and scalable framework for distributed reinforcement learning (RL). The framework consists of a stack of four layers: Provisioner, Orchestrator, Protocol, and Algorithms. The Provisioner abstracts away the machine hardware and node pools across different cloud providers. The Orchestrator provides a unified interface for scheduling and deploying distributed algorithms from a high-level description, and is capable of deploying to a wide range of hardware, from a personal laptop to full-fledged cloud clusters. The Protocol provides network communication primitives optimized for RL. Finally, the SURREAL algorithms, such as Proximal Policy Optimization (PPO) and Evolution Strategies (ES), can easily scale to thousands of CPU cores and hundreds of GPUs. The learning performance of our distributed algorithms establishes a new state of the art on OpenAI Gym and Robotics Suites tasks. Comment: Technical report of the SURREAL system. See more details at https://surreal.stanford.edu
Database: arXiv
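The abstract's Protocol layer centers on communication primitives optimized for RL: actors stream experience toward a learner while the learner broadcasts fresh policy parameters back, with readers always preferring the latest snapshot. The sketch below illustrates that general actor/learner exchange pattern in plain Python; all names (`ParameterServer`, `run_demo`) are illustrative assumptions, not SURREAL's actual API, which is built on networked messaging rather than in-process threads.

```python
import queue
import threading

class ParameterServer:
    """Latest-only parameter store: a fetch always returns the newest
    published weights (illustrative stand-in for a network broadcast)."""

    def __init__(self, params=None):
        self._lock = threading.Lock()
        self._params = params
        self._version = 0

    def publish(self, params):
        with self._lock:
            self._params = params
            self._version += 1

    def fetch(self):
        with self._lock:
            return self._version, self._params

def run_demo(n_actors=2, steps_per_actor=5):
    """Run actors that push experience while a single learner consumes
    it and publishes updated parameters after each consumed transition."""
    exp_queue = queue.Queue()
    ps = ParameterServer(params={"w": 0})

    def actor(actor_id):
        for step in range(steps_per_actor):
            version, _params = ps.fetch()             # pull current policy
            exp_queue.put((actor_id, step, version))  # push a transition

    threads = [threading.Thread(target=actor, args=(i,))
               for i in range(n_actors)]
    for t in threads:
        t.start()

    consumed = 0
    total = n_actors * steps_per_actor
    while consumed < total:                # learner loop
        exp_queue.get()                    # consume one transition
        consumed += 1
        ps.publish({"w": consumed})        # broadcast updated weights

    for t in threads:
        t.join()
    return consumed, ps.fetch()[0]
```

In a real distributed deployment the queue and parameter store would sit behind network sockets so that actors and learners can live on separate machines; the latest-only fetch semantics shown here reflect the common RL design choice of tolerating slightly stale policies on actors rather than blocking them.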