Diff-DAC: Distributed Actor-Critic for Average Multitask Deep Reinforcement Learning

Authors: Sergio Valcarcel Macua, Aleksi Tukiainen, Daniel García-Ocaña Hernández, David Baldazo, Enrique Munoz de Cote, Santiago Zazo
Year of publication: 2017
Subject:
Source: Presented at the Adaptive Learning Agents workshop (ALA2018), July 14, 2018, Stockholm, Sweden
Document type: Working Paper
Description: We propose a fully distributed actor-critic algorithm, approximated by deep neural networks and named Diff-DAC, with application to single-task and to average multitask reinforcement learning (MRL). Each agent has access to data from its local task only, but it aims to learn a policy that performs well on average over the whole set of tasks. During learning, agents communicate their value-policy parameters to their neighbors, diffusing the information across the network, so that they converge to a common policy with no need for a central node. The method is scalable, since the computational and communication costs per agent grow with the number of neighbors rather than with the total number of agents. We derive Diff-DAC from duality theory and provide novel insights into the standard actor-critic framework, showing that it is actually an instance of the dual-ascent method for approximating the solution of a linear program. Experiments suggest that Diff-DAC can outperform the only previous distributed MRL approach (i.e., Dist-MTLPS) and even the centralized architecture.
Database: arXiv
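
The description states that each agent repeatedly averages its value-policy parameters with those of its graph neighbors, so that all agents converge to a common policy without a central node. The following is a minimal, hypothetical Python sketch of one such diffusion (consensus-averaging) mechanism, not the authors' implementation; the names metropolis_weights and diffuse, the stubbed-out local gradient step, and the toy 4-agent ring are illustrative assumptions only.

    # Hypothetical sketch of the neighbor-averaging (diffusion) step described
    # in the abstract; local actor-critic gradient updates are stubbed out.
    import numpy as np

    def metropolis_weights(adjacency):
        """Doubly stochastic combination matrix for an undirected graph."""
        n = adjacency.shape[0]
        degrees = adjacency.sum(axis=1)
        W = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if adjacency[i, j] and i != j:
                    W[i, j] = 1.0 / (1.0 + max(degrees[i], degrees[j]))
            W[i, i] = 1.0 - W[i].sum()
        return W

    def diffuse(params, W):
        """Each agent replaces its parameter vector with a weighted average
        of its own and its neighbors' vectors (one diffusion round)."""
        return W @ params

    # Toy example: 4 agents on a ring, each holding a 3-dimensional parameter vector.
    adjacency = np.array([[0, 1, 0, 1],
                          [1, 0, 1, 0],
                          [0, 1, 0, 1],
                          [1, 0, 1, 0]])
    W = metropolis_weights(adjacency)
    params = np.random.randn(4, 3)   # row i = agent i's (actor or critic) parameters
    for _ in range(50):
        # in the full algorithm, a local gradient step on each agent's task would go here
        params = diffuse(params, W)
    print(np.allclose(params, params.mean(axis=0)))  # rows converge to a common vector

Because the combination matrix is doubly stochastic and the graph is connected, repeated diffusion drives all rows toward the network average, which is the mechanism by which agents agree on a common policy in this kind of scheme; per-round cost for each agent scales with its number of neighbors.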