Showing 1 - 10 of 2,073 results for search: '"Pinot A"'
Federated learning (FL) is an appealing paradigm that allows a group of machines (a.k.a. clients) to learn collectively while keeping their data local. However, due to the heterogeneity between the clients' data distributions, the model obtained through…
External link:
http://arxiv.org/abs/2409.20329
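The snippet above describes the basic FL setup: clients train on their own data and a server combines their models. The following is a minimal, generic FedAvg-style sketch (an illustration, not the algorithm of the linked paper) with local updates and plain server-side averaging; because the clients are given different optima, the averaged model fits none of them exactly, which is the heterogeneity issue the abstract points at.

# Minimal FedAvg-style sketch (illustrative only; not the algorithm of the paper above).
# Each client holds its own data distribution, takes a few local SGD steps on a
# linear model, and the server averages the resulting parameters.
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(shift, n=100, d=5):
    """Heterogeneous clients: each one has a different optimum (w* = shift * ones)."""
    X = rng.normal(size=(n, d))
    y = X @ (shift * np.ones(d)) + 0.1 * rng.normal(size=n)
    return X, y

def local_update(w, X, y, lr=0.05, steps=10):
    """A few local gradient steps on the client's own least-squares loss."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

clients = [make_client_data(shift) for shift in (0.0, 0.5, 1.0, 2.0)]
w_global = np.zeros(5)

for rnd in range(20):  # communication rounds
    local_models = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_models, axis=0)  # plain averaging at the server

print("final global model:", np.round(w_global, 3))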
Author:
Guerraoui, Rachid, Kermarrec, Anne-Marie, Kucherenko, Anastasiia, Pinot, Rafael, de Vos, Martijn
The ability of a peer-to-peer (P2P) system to effectively host decentralized applications often relies on the availability of a peer-sampling service, which provides each participant with a random sample of other peers. Despite the practical effectiveness…
External link:
http://arxiv.org/abs/2408.03829
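The entry above relies on the notion of a peer-sampling service: each participant can ask for a random subset of other peers. Below is a deliberately simplified, hypothetical sketch of such a service; it is centralized for readability, whereas practical P2P samplers are typically gossip-based and decentralized.

# Minimal peer-sampling sketch (illustrative; real P2P samplers are decentralized,
# typically gossip-based, rather than a single object holding the full membership).
import random

class PeerSampler:
    def __init__(self, peers, seed=42):
        self.peers = list(peers)
        self.rng = random.Random(seed)

    def sample(self, requester, k=3):
        """Return k peers drawn uniformly at random, excluding the requester."""
        candidates = [p for p in self.peers if p != requester]
        return self.rng.sample(candidates, min(k, len(candidates)))

sampler = PeerSampler(peers=[f"node{i}" for i in range(10)])
print(sampler.sample("node0"))  # prints three peer ids other than node0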
Batch normalization has proven to be a very beneficial mechanism to accelerate the training and improve the accuracy of deep neural networks in centralized environments. Yet, the scheme faces significant challenges in federated learning, especially under…
External link:
http://arxiv.org/abs/2405.14670
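The snippet above concerns batch normalization in federated learning, where clients' activation statistics differ. One workaround often discussed in this setting (not necessarily the approach of the linked paper) is to average only the non-BN parameters and keep each client's BN statistics local; a hypothetical sketch:

# Hypothetical sketch: average only non-BN parameters across clients and keep each
# client's BN statistics local (a workaround often discussed for federated BN;
# not necessarily the method of the paper above).
import numpy as np

def aggregate_non_bn(client_states):
    """client_states: list of dicts mapping parameter name -> np.ndarray."""
    agg = {}
    for name in client_states[0]:
        if "bn" in name:          # leave BN parameters/statistics client-specific
            continue
        agg[name] = np.mean([s[name] for s in client_states], axis=0)
    return agg

clients = [
    {"conv.weight": np.ones((3, 3)) * i, "bn.running_mean": np.full(3, float(i))}
    for i in range(1, 4)
]
print(aggregate_non_bn(clients))  # only 'conv.weight' is averaged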
The success of machine learning (ML) has been intimately linked with the availability of large amounts of data, typically collected from heterogeneous sources and processed on vast networks of computing devices (also called workers). Beyond…
External link:
http://arxiv.org/abs/2405.00491
Author:
Allouah, Youssef, Farhadkhani, Sadegh, Guerraoui, Rachid, Gupta, Nirupam, Pinot, Rafael, Rizk, Geovani, Voitovych, Sasha
The possibility of adversarial (a.k.a. Byzantine) clients makes federated learning (FL) prone to arbitrary manipulation. The natural approach to robustify FL against adversarial clients is to replace the simple averaging operation at the server…
External link:
http://arxiv.org/abs/2402.12780
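The abstract above mentions replacing the server's plain averaging with a robust aggregation rule. A standard example of such a rule (used here purely for illustration; the linked paper may study a different one) is the coordinate-wise median, which a single outlier cannot drag arbitrarily far, unlike the mean.

# Coordinate-wise median as a stand-in for plain averaging at the server
# (a standard robust aggregation rule; not necessarily the one studied above).
import numpy as np

def coordinate_wise_median(gradients):
    """gradients: list of 1-D arrays, one per client (some may be Byzantine)."""
    return np.median(np.stack(gradients), axis=0)

honest = [np.array([1.0, 2.0, 3.0]) + 0.1 * i for i in range(4)]
byzantine = [np.array([100.0, -100.0, 100.0])]          # arbitrary manipulation
print("mean  :", np.mean(np.stack(honest + byzantine), axis=0))   # pulled toward the outlier
print("median:", coordinate_wise_median(honest + byzantine))      # stays near the honest values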
The theory underlying robust distributed learning algorithms, designed to resist adversarial machines, matches empirical observations when data is homogeneous. Under data heterogeneity, however, which is the norm in practical scenarios, established…
External link:
http://arxiv.org/abs/2309.13591
Author:
Choffrut, Antoine, Guerraoui, Rachid, Pinot, Rafael, Sirdey, Renaud, Stephan, John, Zuber, Martin
Due to the widespread availability of data, machine learning (ML) algorithms are increasingly being implemented in distributed topologies, wherein various nodes collaborate to train ML models via the coordination of a central server. However, distributed…
External link:
http://arxiv.org/abs/2309.05395
Author:
Guerraoui, Rachid, Kermarrec, Anne-Marie, Kucherenko, Anastasiia, Pinot, Rafael, Voitovych, Sasha
Detecting the source of a gossip is a critical issue, related to identifying patient zero in an epidemic, or the origin of a rumor in a social network. Although it is widely acknowledged that random and local gossip communications make source identification…
External link:
http://arxiv.org/abs/2308.02477
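The entry above deals with identifying the source of a gossip spread by random, local communications. The toy simulation below only illustrates that spreading process (push gossip: each informed node forwards the rumor to one random peer per round); it is not a source-detection method and makes no claim about the linked paper's model.

# Minimal push-gossip simulation (illustrative only).
import random

def push_gossip(n_nodes=50, source=0, seed=1):
    rng = random.Random(seed)
    informed = {source}
    rounds = 0
    while len(informed) < n_nodes:
        for node in list(informed):
            target = rng.randrange(n_nodes)   # random, local communication
            informed.add(target)
        rounds += 1
    return rounds

print("rounds until everyone has heard the gossip:", push_gossip())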
The ubiquity of distributed machine learning (ML) in sensitive public domain applications calls for algorithms that protect data privacy, while being robust to faults and adversarial behaviors. Although privacy and robustness have been extensively studied…
External link:
http://arxiv.org/abs/2302.04787
Author:
Allouah, Youssef, Farhadkhani, Sadegh, Guerraoui, Rachid, Gupta, Nirupam, Pinot, Rafael, Stephan, John
Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or Byzantine) machines. Although this problem has received significant attention, prior works often assume the data held by the machines to be…
External link:
http://arxiv.org/abs/2302.01772
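Like the median sketch shown after an earlier entry, the coordinate-wise trimmed mean is another standard robust aggregation rule used in Byzantine ML: in each coordinate it discards the f largest and f smallest client values before averaging. This is again a generic illustration, not necessarily the rule analyzed in the linked paper.

# Coordinate-wise trimmed mean: per coordinate, discard the f largest and
# f smallest client values, then average the rest (generic illustration).
import numpy as np

def trimmed_mean(gradients, f):
    G = np.sort(np.stack(gradients), axis=0)      # sort each coordinate independently
    return np.mean(G[f:len(gradients) - f], axis=0)

grads = [np.array([1.0, 2.0]), np.array([1.2, 1.8]),
         np.array([0.9, 2.1]), np.array([50.0, -50.0])]  # last one is Byzantine
print(trimmed_mean(grads, f=1))   # close to the honest clients' average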