Communication-efficient and federated multi-agent reinforcement learning
Author: Anis Elgabli, Mehdi Bennis, Mounssif Krouka, Chaouki Ben Issaid
Language: English
Year of publication: 2022
Subject: policy gradient, reinforcement learning, analog transmission, analog communications, analog signal, ADMM, distributed optimization, distributed computing, communication channel, bandwidth (signal processing), wireless, Computer Networks and Communications, Computer Science, Artificial Intelligence, Hardware and Architecture
Source: IEEE Transactions on Cognitive Communications and Networking
Description: In this paper, we consider a distributed reinforcement learning setting where agents communicate with a central entity in a shared environment to maximize a global reward. A main challenge in this setting is that the randomness of the wireless channel perturbs each agent’s model update, while multiple agents’ updates may interfere with one another when communicating under limited bandwidth. To address this issue, we propose a novel distributed reinforcement learning algorithm based on the alternating direction method of multipliers (ADMM) and over-the-air aggregation using an analog transmission scheme, referred to as A-RLADMM. Our algorithm incorporates the wireless channel into the formulation of the ADMM method, which enables agents to transmit each element of their updated models over the same channel using analog communication. Numerical experiments on a multi-agent collaborative navigation task show that our proposed algorithm significantly outperforms the digital communication baseline of A-RLADMM (D-RLADMM), the lazily aggregated policy gradient (RL-LAPG), as well as the analog and digital communication versions of vanilla federated learning (A-FRL and D-FRL, respectively).
Database: OpenAIRE
External link:
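The over-the-air aggregation idea described in the abstract — agents transmitting analog-precoded updates simultaneously so the channel itself computes their sum — can be illustrated with a minimal NumPy sketch. All names, the fading model, and the channel-inversion precoding here are illustrative assumptions for exposition, not the paper's actual A-RLADMM formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
num_agents, dim = 5, 8

# Hypothetical local model updates (e.g., ADMM primal variables), one row per agent.
updates = rng.normal(size=(num_agents, dim))

# Each agent experiences an independent fading coefficient h_i; pre-dividing by it
# (channel-inversion precoding) is one common assumption in analog aggregation schemes.
h = rng.uniform(0.5, 1.5, size=num_agents)
precoded = updates / h[:, None]

# All agents transmit at once: the wireless medium superposes the analog waveforms,
# so the server receives the channel-weighted sum plus additive noise.
noise = rng.normal(scale=0.01, size=dim)
received = (h[:, None] * precoded).sum(axis=0) + noise

# One channel use yields an estimate of the true sum of all agents' updates.
true_sum = updates.sum(axis=0)
rel_error = np.linalg.norm(received - true_sum) / np.linalg.norm(true_sum)
```

The point of the sketch is the bandwidth saving: the server obtains the aggregate in a single simultaneous transmission rather than decoding each agent's update separately, at the cost of a small noise-induced error (`rel_error` above).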