Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing

Authors: Lari, Ehsan; Arablouei, Reza; Gogineni, Vinay Chakravarthi; Werner, Stefan
Publication Year: 2024
Subject:
Document Type: Working Paper
Description: Federated learning (FL) enables training machine learning models on distributed data without compromising privacy. However, FL is vulnerable to model-poisoning attacks, in which malicious clients tamper with their local models to manipulate the global model. In this work, we investigate the resilience of the partial-sharing online FL (PSO-Fed) algorithm against such attacks. PSO-Fed reduces communication overhead by allowing clients to share only a fraction of their model updates with the server. We demonstrate that this partial-sharing mechanism has the added advantage of enhancing PSO-Fed's robustness to model-poisoning attacks. Through theoretical analysis, we show that PSO-Fed maintains convergence even under Byzantine attacks, where malicious clients inject noise into their updates. Furthermore, we derive a formula for PSO-Fed's mean square error, accounting for factors such as the stepsize, the attack probability, and the number of malicious clients. Interestingly, we find a non-trivial optimal stepsize that maximizes PSO-Fed's resistance to these attacks. Extensive numerical experiments confirm our theoretical findings and showcase PSO-Fed's superior performance against model-poisoning attacks compared to other leading FL algorithms.
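To make the partial-sharing idea concrete, below is a minimal NumPy sketch of a PSO-Fed-style simulation under a Byzantine noise-injection attack. It assumes a streaming linear-regression (LMS-style) local update, a uniformly random entry-selection rule, and entry-wise averaging at the server; all names and parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch of partial-sharing online FL (PSO-Fed style) under
# Byzantine model poisoning. Selection rule, aggregation rule, and all
# parameters are simplifying assumptions for demonstration only.

rng = np.random.default_rng(0)
d, K, M = 20, 10, 5        # model dimension, clients, shared entries per round
num_byzantine = 2          # number of malicious clients
attack_prob = 0.5          # probability a malicious client attacks in a round
noise_std = 1.0            # standard deviation of injected poisoning noise
mu = 0.05                  # stepsize (the paper derives a non-trivial optimum)

w_true = rng.standard_normal(d)    # common target model
w_global = np.zeros(d)
w_local = np.zeros((K, d))

for t in range(2000):
    agg = np.zeros(d)
    counts = np.zeros(d)
    for k in range(K):
        # Partial sharing: each client exchanges only M randomly chosen entries.
        idx = rng.choice(d, size=M, replace=False)
        w_local[k, idx] = w_global[idx]

        # Online (streaming) LMS-style update on one fresh local sample.
        x = rng.standard_normal(d)
        y = x @ w_true + 0.01 * rng.standard_normal()
        err = y - x @ w_local[k]
        w_local[k] += mu * err * x

        # Byzantine clients poison their shared entries with some probability.
        shared = w_local[k, idx].copy()
        if k < num_byzantine and rng.random() < attack_prob:
            shared += noise_std * rng.standard_normal(M)

        agg[idx] += shared
        counts[idx] += 1

    # Server averages the entries it received; untouched entries are kept.
    mask = counts > 0
    w_global[mask] = agg[mask] / counts[mask]

print("final model error:", np.linalg.norm(w_global - w_true))
```

Sweeping mu in such a simulation is one way to observe the non-trivial optimal stepsize the abstract mentions: too small a stepsize slows adaptation, while too large a stepsize amplifies the injected noise.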
Comment: 13 pages, 9 figures, Submitted to TSIPN
Database: arXiv