Distributed Learning in Wireless Networks: Recent Progress and Future Challenges

Author: Deniz Gunduz, Walid Saad, Kaibin Huang, Mingzhe Chen, Mehdi Bennis, Aneta Vulgarakis Feljan, H. Vincent Poor
Contributors: Commission of the European Communities, Engineering and Physical Sciences Research Council (EPSRC)
Year of publication: 2021
Subject:
FOS: Computer and information sciences
Science & Technology
Technology
Engineering
Computer science
Computer Science - Machine Learning (cs.LG)
Computer Science - Information Theory (cs.IT)
Distributed computing
Information privacy
02 engineering and technology
0202 electrical engineering, electronic engineering, information engineering
020206 networking & telecommunications
0805 Distributed Computing
0906 Electrical and Electronic Engineering
1005 Communications Technologies
Distributed learning
federated learning
federated distillation
distributed inference
multi-agent reinforcement learning
Reinforcement learning
Wireless networks
wireless edge networks
Wireless
6G
Edge device
Data models
Measurement
Performance evaluation
Training
Overhead (computing)
Distance learning
Computer aided instruction
Enhanced Data Rates for GSM Evolution
Telecommunications
Networking & Telecommunications
Computer Networks and Communications
Electrical and Electronic Engineering
Engineering, Electrical & Electronic
ALLOCATION
DESIGN
COMMUNICATION-EFFICIENT
POWER-CONTROL
STOCHASTIC GRADIENT DESCENT
UNCODED TRANSMISSION
OVER-THE-AIR COMPUTATION
Source: IEEE Journal on Selected Areas in Communications, 39:3579-3605
ISSN: 0733-8716 (print), 1558-0008 (online)
DOI: 10.1109/jsac.2021.3118346
Description: The next generation of wireless networks will enable many machine learning (ML) tools and applications to efficiently analyze various types of data collected by edge devices for inference, autonomy, and decision-making purposes. However, due to resource constraints, delay limitations, and privacy challenges, edge devices cannot offload their entire collected datasets to a cloud server for centralized ML model training or inference. To overcome these challenges, distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without exchanging raw data, thus reducing communication overhead and latency as well as improving data privacy. However, deploying distributed learning over wireless networks faces several challenges, including the uncertain wireless environment (e.g., dynamic channels and interference), limited wireless resources (e.g., transmit power and radio spectrum), and limited hardware resources (e.g., computational power). This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks. We present a detailed overview of several emerging distributed learning paradigms, including federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning. For each learning framework, we first introduce the motivation for deploying it over wireless networks. Then, we present a detailed literature review on the use of communication techniques for its efficient deployment. We then introduce an illustrative example to show how to optimize wireless networks to improve the framework's performance. Finally, we discuss future research opportunities. In a nutshell, this paper provides a holistic set of guidelines on how to deploy a broad range of distributed learning frameworks over real-world wireless communication networks. (A minimal federated averaging sketch illustrating the federated learning setting appears after this record.)
Database: OpenAIRE
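
The description above refers to federated learning, in which edge devices train a shared model collaboratively by exchanging model parameters rather than raw data. The sketch below illustrates federated averaging (FedAvg) in that setting; the linear model, synthetic per-device data, learning rate, and function names are illustrative assumptions and are not taken from the paper, nor do they capture any of the wireless-specific optimizations it surveys.

# Minimal federated averaging (FedAvg) sketch in NumPy.
# Illustrative only: the model is a linear regressor trained by local
# gradient descent; the synthetic data and names are assumptions, not
# material from the surveyed paper.
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(num_clients=5, samples_per_client=100, dim=10):
    """Create synthetic (X, y) pairs held locally by each edge device."""
    true_w = rng.normal(size=dim)
    clients = []
    for _ in range(num_clients):
        X = rng.normal(size=(samples_per_client, dim))
        y = X @ true_w + 0.1 * rng.normal(size=samples_per_client)
        clients.append((X, y))
    return clients

def local_update(w, X, y, lr=0.01, epochs=5):
    """Run a few epochs of full-batch gradient descent on one client's data."""
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg(clients, rounds=20, dim=10):
    """Server loop: broadcast the global model, collect local updates,
    and average them weighted by each client's dataset size.
    Only model parameters are exchanged, never the raw (X, y) data."""
    w_global = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for r in range(rounds):
        local_models = [local_update(w_global.copy(), X, y) for X, y in clients]
        w_global = sum(len(y) / total * w_k
                       for w_k, (_, y) in zip(local_models, clients))
        loss = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in clients])
        print(f"round {r:2d}  avg MSE = {loss:.4f}")
    return w_global

if __name__ == "__main__":
    clients = make_client_data()
    fedavg(clients)

Running the script prints the average mean-squared error per round as the global model converges. In an actual wireless deployment, the averaging step would additionally have to contend with channel noise, scheduling, quantization, and limited transmit power, which are the issues the paper surveys.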