Tighter Regret Analysis and Optimization of Online Federated Learning
Author: | Kwon, Dohyeok; Park, Jonghwan; Hong, Songnam |
Publication Year: | 2022 |
Document Type: | Working Paper |
Description: | In federated learning (FL), it is commonly assumed that all data are available at the clients at the start of machine learning (ML) optimization (i.e., offline learning). In many real-world applications, however, learning is expected to proceed in an online fashion. To this end, online FL (OFL) has been introduced, which aims to learn a sequence of global models from decentralized streaming data such that the so-called cumulative regret is minimized. In this framework, FedOGD, which combines online gradient descent with model averaging, is constructed as the counterpart of FedSGD in FL. While it enjoys an optimal sublinear regret, FedOGD suffers from heavy communication costs. In this paper, we present a communication-efficient method (named OFedIQ) based on intermittent transmission (enabled by client subsampling and periodic transmission) and quantization. For the first time, we derive a regret bound that captures the impact of data heterogeneity and of the communication-efficient techniques. Using this bound, we efficiently optimize the parameters of OFedIQ, such as the sampling rate, transmission period, and quantization levels. We also prove that the optimized OFedIQ asymptotically achieves the performance of FedOGD while reducing communication costs by 99%. Experiments on real datasets demonstrate the effectiveness of the optimized OFedIQ (see the sketch following this record). Comment: v3. Compared to the previous version, a tighter regret analysis and parameter optimization have been included. v4. Comments added. |
Database: | arXiv |
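The abstract outlines OFedIQ's building blocks: per-client online gradient descent, periodic transmission from a random subsample of clients, quantized uploads, and server-side model averaging. Below is a minimal simulation sketch of such a pipeline; the linear model, squared loss, the generic stochastic uniform quantizer, and all parameter values (`sample_rate`, `period`, `levels`, etc.) are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(v, levels):
    """Unbiased stochastic uniform quantizer (a generic stand-in;
    the quantizer analyzed in the paper may differ in detail)."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    scaled = np.abs(v) / norm * levels                  # map |v_i| into [0, levels]
    lower = np.floor(scaled)
    q = lower + (rng.random(v.shape) < scaled - lower)  # randomized rounding
    return np.sign(v) * q * norm / levels               # unbiased: E[output] = v

# Illustrative setup: K clients learn a linear model on streaming data.
K, d, T = 20, 5, 200                               # clients, dimension, rounds
lr, sample_rate, period, levels = 0.1, 0.2, 5, 4   # assumed parameter values
w_global = np.zeros(d)
w_local = np.tile(w_global, (K, 1))
w_true = rng.normal(size=d)                        # synthetic data-generating model

for t in range(1, T + 1):
    for k in range(K):                             # each client sees one new sample
        x = rng.normal(size=d)
        y = w_true @ x + 0.1 * rng.normal()
        grad = (w_local[k] @ x - y) * x            # squared-loss gradient
        w_local[k] -= lr * grad                    # online gradient descent step
    if t % period == 0:                            # periodic transmission
        sampled = np.flatnonzero(rng.random(K) < sample_rate)  # client subsampling
        if sampled.size > 0:
            deltas = [stochastic_quantize(w_local[k] - w_global, levels)
                      for k in sampled]            # quantized uploads
            w_global = w_global + np.mean(deltas, axis=0)      # model averaging
            w_local[:] = w_global                  # broadcast global model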