Author: |
Kommaraju, Ananda V., Maschhoff, Kristyn J., Ringenburg, Michael F., Robbins, Benjamin |
Subject: |
|
Source: |
Concurrency & Computation: Practice & Experience; 10/25/2020, Vol. 32 Issue 20, p1-12, 12p |
Abstract: |
Summary: Recent advancements in deep learning have made reinforcement learning (RL) applicable to a much broader range of decision-making problems. However, the emergence of reinforcement learning workloads brings multiple challenges to system resource management. RL applications continuously train a deep learning or machine learning model while interacting with uncertain simulation models. This new generation of AI applications imposes significant demands on system resources such as memory, storage, network, and compute. In this paper, we describe a typical RL application workflow and introduce the Ray distributed execution framework developed at the UC Berkeley RISELab. Ray includes the RLlib library for executing distributed reinforcement learning applications. We describe a recipe for deploying the Ray execution framework on Cray XC systems and demonstrate scaling of RLlib algorithms across multiple nodes of the system. We also explore performance characteristics across multiple CPU and GPU node types. [ABSTRACT FROM AUTHOR] |
Database: |
Complementary Index |
External link: |
|