Showing 1 - 10 of 19 for the search: '"Kameni, Laetitia"'
Monitoring, understanding, and optimizing the energy consumption of Machine Learning (ML) are among the reasons why it is necessary to evaluate its energy usage. However, there exists no universal tool that can answer this question for all use cases…
External link:
http://arxiv.org/abs/2408.15128
Federated learning (FL) is an effective solution for training machine learning models on the increasing amount of data generated by IoT devices and smartphones while keeping such data localized. Most previous work on federated learning assumes that clients…
External link:
http://arxiv.org/abs/2301.01542
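The federated training loop described in the abstract above (clients train on their local data, the data never leaves the device, and a server averages the resulting models) can be illustrated with a minimal toy FedAvg sketch on a synthetic linear-regression task. This is not the paper's implementation; all function names and the model are illustrative assumptions.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One step of local gradient descent on a client's private data.
    Loss: mean squared error of a linear model (illustrative only)."""
    X, y = data
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fedavg_round(global_w, client_data):
    """One FedAvg round: each client trains locally, then the server
    averages the resulting models, weighted by local dataset size."""
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    local_models = [local_update(global_w.copy(), d) for d in client_data]
    return np.average(local_models, axis=0, weights=sizes)

# Synthetic clients whose local data share one true linear model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = fedavg_round(w, clients)
# After enough rounds, w approaches true_w without any client
# ever sharing its raw (X, y) data with the server.
```

The weighting by dataset size matches the standard FedAvg convention of giving clients with more data a proportionally larger say in the aggregate.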
Author:
Fraboni, Yann, Van Waerebeke, Martin, Scaman, Kevin, Vidal, Richard, Kameni, Laetitia, Lorenzi, Marco
Machine Unlearning (MU) is an increasingly important topic in machine learning safety, aiming at removing the contribution of a given data point from a training procedure. Federated Unlearning (FU) consists in extending MU to unlearn a given client's…
External link:
http://arxiv.org/abs/2211.11656
We propose a novel framework to study asynchronous federated learning optimization with delays in gradient updates. Our theoretical framework extends the standard FedAvg aggregation scheme by introducing stochastic aggregation weights to represent the…
External link:
http://arxiv.org/abs/2206.10189
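The stochastic aggregation weights mentioned in the abstract above can be illustrated with a toy scheme in which each client's update arrives only with some probability (a crude stand-in for delayed or asynchronous clients) and received updates are inverse-probability reweighted so the aggregate stays unbiased in expectation. This is a hedged sketch of the general idea, not the paper's FedAvg extension; the names and the Bernoulli participation model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def aggregate_stochastic(global_w, updates, participation=0.5):
    """One server aggregation step with stochastic aggregation weights.

    Each client's update is received only with probability
    `participation`; received updates are rescaled by 1/participation
    so that the expected aggregate equals the full-participation
    FedAvg average of the updates."""
    updates = np.asarray(updates, dtype=float)
    mask = rng.random(len(updates)) < participation
    if not mask.any():
        return np.asarray(global_w, dtype=float)  # nothing arrived this round
    weights = mask / (participation * len(updates))
    return np.asarray(global_w, dtype=float) + weights @ updates

# With participation=1.0 every client responds, and the step reduces
# to plain averaging of the client updates.
g = np.zeros(2)
ups = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
full = aggregate_stochastic(g, ups, participation=1.0)
```

The inverse-probability rescaling is what keeps the scheme unbiased: clients that respond rarely contribute larger steps when they do respond.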
Federated learning allows clients to collaboratively learn statistical models while keeping their data local. Federated learning was originally used to train a unique global model to be served to all clients, but this approach might be sub-optimal when…
External link:
http://arxiv.org/abs/2111.09360
The increasing size of data generated by smartphones and IoT devices motivated the development of Federated Learning (FL), a framework for on-device collaborative training of machine learning models. First efforts in FL focused on learning a single global…
External link:
http://arxiv.org/abs/2108.10252
While client sampling is a central operation of current state-of-the-art federated learning (FL) approaches, the impact of this procedure on the convergence and speed of FL remains under-investigated. In this work, we provide a general theoretical framework…
External link:
http://arxiv.org/abs/2107.12211
This work addresses the problem of optimizing communications between server and clients in federated learning (FL). Current sampling approaches in FL are either biased, or non-optimal in terms of server-client communications and training stability.
External link:
http://arxiv.org/abs/2105.05883
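Unbiased client sampling, the subject of the two abstracts above, can be sketched as drawing clients with probability proportional to an importance (for example, local dataset size) and returning aggregation weights such that the expected weighted sum of sampled updates matches the importance-weighted average over all clients. A toy illustration under assumed names, not the sampling scheme proposed in these papers.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_clients(importances, m):
    """Sample m clients (with replacement) with probability proportional
    to `importances`, and return aggregation weights making the weighted
    sum of sampled updates an unbiased estimate of the
    importance-weighted average update."""
    importances = np.asarray(importances, dtype=float)
    p = importances / importances.sum()
    idx = rng.choice(len(p), size=m, replace=True, p=p)
    # General inverse-probability weight: (p_i / p_i) / m. Here sampling
    # is already proportional to importance, so each draw gets 1/m.
    weights = (p[idx] / p[idx]) / m
    return idx, weights

# Three clients holding 10, 30, and 60 examples respectively:
idx, weights = sample_clients([10, 30, 60], m=4)
```

Sampling proportionally to importance concentrates communication on the clients that matter most for the aggregate, while the returned weights keep the estimator unbiased.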
Academic article
Published in:
International Workshop on Trustworthy Federated Learning in Conjunction with IJCAI 2022 (FL-IJCAI'22), Jul 2022, Vienna, Austria
While client sampling is a central operation of current state-of-the-art federated learning (FL) approaches, the impact of this procedure on the convergence and speed of FL remains under-investigated. In this work, we provide…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::8436a2d586daef422d873e2d0cc0872b
https://hal.science/hal-03500307v2/document