Showing 1 - 10 of 200 for search: '"Condat, Laurent"'
In the recent paradigm of Federated Learning (FL), multiple clients train a shared model while keeping their local data private. Resource constraints of clients and communication costs pose major problems for training large models in FL. On the one hand …
External link:
http://arxiv.org/abs/2405.20623
Authors:
Condat, Laurent, Richtárik, Peter
Point-SAGA is a randomized algorithm for minimizing a sum of convex functions using their proximity operators (proxs), proposed by Defazio (2016). At every iteration, the prox of only one randomly chosen function is called. We generalize the algorithm … (an illustrative sketch of the basic update appears after the link below)
External link:
http://arxiv.org/abs/2405.19951
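As orientation for the entry above, here is a minimal sketch of the Point-SAGA update of Defazio (2016) on a toy problem. The scalar quadratic losses, the step size gamma, and the iteration count are illustrative assumptions made here, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: minimize (1/n) * sum_i f_i(x) with f_i(x) = 0.5 * (x - a_i)^2.
# The minimizer is mean(a). (Illustrative assumption, not from the entry.)
a = rng.normal(size=20)
n = a.size
gamma = 0.5               # step size (illustrative choice)

def prox_fi(z, i):
    # Prox of f_i with parameter gamma: argmin_x 0.5*(x - a[i])^2 + (1/(2*gamma))*(x - z)^2.
    return (z + gamma * a[i]) / (1.0 + gamma)

x = 0.0                   # iterate
g = np.zeros(n)           # table of stored gradients, one per function
g_mean = g.mean()

for _ in range(3000):
    j = rng.integers(n)
    z = x + gamma * (g[j] - g_mean)   # point shifted by the variance-reduction correction
    x_new = prox_fi(z, j)             # prox of the single randomly chosen function
    g_new = (z - x_new) / gamma       # implicit gradient of f_j at x_new
    g_mean += (g_new - g[j]) / n      # maintain the running average of the table
    g[j] = g_new
    x = x_new

print(x, a.mean())        # the iterate approaches the minimizer mean(a)
```

Each iteration calls the prox of a single randomly chosen function and maintains a table of stored gradients, which is the per-iteration mechanism the entry refers to.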
Monotone inclusions have a wide range of applications, including minimization, saddle-point, and equilibrium problems. We introduce new stochastic algorithms, with or without variance reduction, to estimate a root of the expectation of possibly set-valued … (a toy stochastic-approximation sketch of this problem class appears after the link below)
External link:
http://arxiv.org/abs/2405.14255
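The entry above concerns finding a root of the expectation of a (possibly set-valued) operator. For illustration only, the following is a plain Robbins-Monro stochastic-approximation loop on a toy single-valued operator; it is not the variance-reduced method of the paper, and the operator, step sizes, and iteration count are assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy root-finding problem: find x such that E[F(x, xi)] = 0, where
# F(x, xi) = x - xi and xi ~ N(mu, 1). The root is x = mu.
mu = 3.0
x = 0.0
for k in range(1, 20001):
    xi = rng.normal(mu, 1.0)
    step = 1.0 / k              # diminishing step size
    x -= step * (x - xi)        # move along a stochastic evaluation of the operator
print(x)                        # close to mu = 3.0
```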
Federated Learning (FL) has garnered increasing attention due to its unique characteristic of allowing heterogeneous clients to process their private data locally and interact with a central server, while being respectful of privacy. A critical bottleneck …
External link:
http://arxiv.org/abs/2403.09904
In distributed optimization and learning, and even more in the modern framework of federated learning, communication, which is slow and costly, is critical. We introduce LoCoDL, a communication-efficient algorithm that leverages the two popular and effective techniques of local training and compression … (a sketch of a standard unbiased compressor appears after the link below)
External link:
http://arxiv.org/abs/2403.04348
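LoCoDL combines local training with communication compression. As background, here is a sketch of rand-k, a standard unbiased sparsifying compressor used in communication-efficient distributed learning; the entry does not say which compressor LoCoDL uses, so this is purely an illustrative example.

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_k(v, k):
    # Unbiased rand-k sparsifier: keep k coordinates chosen uniformly at random
    # and rescale them by d/k so that E[rand_k(v)] = v.
    d = v.size
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * (d / k)
    return out

v = rng.normal(size=10)
est = np.mean([rand_k(v, 3) for _ in range(20000)], axis=0)
print(np.round(est - v, 2))   # close to zero: the compressor is unbiased
```

Sending only k of d coordinates (plus their indices) per round is what makes such compressors attractive when communication is the bottleneck.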
The ProxSkip algorithm for decentralized and federated learning is gaining increasing attention due to its proven benefits in accelerating communication complexity while maintaining robustness against data heterogeneity. However, existing analyses of … (an illustrative sketch of the ProxSkip update appears after the link below)
External link:
http://arxiv.org/abs/2310.07983
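For reference, a minimal sketch of the ProxSkip update (Mishchenko et al., 2022), applied here to a toy composite problem in which the prox plays the role of the expensive communication step. The toy objective, the step size gamma, and the probability p are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy composite problem: min_x 0.5*||x - b||^2 + lam*||x||_1.
# ProxSkip evaluates the (expensive) prox only with probability p per iteration;
# in federated learning that prox is the communication/consensus step.
d, lam, gamma, p = 20, 0.5, 0.2, 0.2
b = rng.normal(size=d)

def grad_f(x):
    # Gradient of the smooth part f(x) = 0.5*||x - b||^2.
    return x - b

def prox_psi(z, step):
    # Prox of step*lam*||.||_1, i.e. soft-thresholding at level step*lam.
    return np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

x = np.zeros(d)
h = np.zeros(d)                                   # control variate
for _ in range(5000):
    x_hat = x - gamma * (grad_f(x) - h)           # gradient step corrected by h
    if rng.random() < p:                          # rare prox (communication) step
        x = prox_psi(x_hat - (gamma / p) * h, gamma / p)
    else:                                         # prox skipped
        x = x_hat
    h = h + (p / gamma) * (x - x_hat)             # control-variate update

# The exact solution is soft-thresholding of b at level lam.
print(np.max(np.abs(x - prox_psi(b, 1.0))))       # should be close to 0
```

With probability 1 - p the prox is skipped, so only about p prox calls are made per iteration on average, which is the source of the communication savings the entry refers to.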
Looking for sparsity is nowadays crucial to speed up the training of large-scale neural networks. Projections onto the $\ell_{1,2}$ and $\ell_{1,\infty}$ balls are among the most efficient techniques to sparsify and reduce the overall cost of neural networks … (a sketch of an $\ell_{1,2}$-ball projection appears after the link below)
External link:
http://arxiv.org/abs/2307.09836
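A minimal sketch of one standard way to project onto the $\ell_{1,2}$ ball: project the vector of per-group Euclidean norms onto the $\ell_1$ ball (Duchi et al., 2008) and rescale each group. The group structure and radius below are assumptions for illustration; the paper above is about efficient algorithms for such projections, which this sketch does not reproduce.

```python
import numpy as np

def project_l1_ball(w, r):
    # Euclidean projection of a nonnegative vector w onto {u >= 0 : sum(u) <= r},
    # using the sorting-based method of Duchi et al. (2008).
    if w.sum() <= r:
        return w.copy()
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, w.size + 1)
    rho = ks[u - (css - r) / ks > 0][-1]
    theta = (css[rho - 1] - r) / rho
    return np.maximum(w - theta, 0.0)

def project_l12_ball(x, groups, r):
    # Project x onto {x : sum_g ||x_g||_2 <= r}: project the vector of group
    # norms onto the l1 ball, then rescale each group accordingly.
    norms = np.array([np.linalg.norm(x[g]) for g in groups])
    new_norms = project_l1_ball(norms, r)
    y = x.copy()
    for g, old, new in zip(groups, norms, new_norms):
        y[g] = x[g] * (new / old) if old > 0 else 0.0
    return y

rng = np.random.default_rng(4)
x = rng.normal(size=12)
groups = [slice(0, 4), slice(4, 8), slice(8, 12)]   # illustrative group structure
y = project_l12_ball(x, groups, r=1.0)
print(sum(np.linalg.norm(y[g]) for g in groups))    # at most 1.0; some groups zeroed
```

Groups whose rescaled norm hits zero are removed entirely, which is the group-sparsifying effect such projections are used for.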
Explicit Personalization and Local Training: Double Communication Acceleration in Federated Learning
Federated Learning is an evolving machine learning paradigm, in which multiple clients perform computations based on their individual private data, interspersed with communication with a remote server. A common strategy to curtail communication costs is …
External link:
http://arxiv.org/abs/2305.13170
In distributed optimization and learning, several machines alternate between local computations in parallel and communication with a distant server. Communication is usually slow and costly and forms the main bottleneck. This is particularly true in …
External link:
http://arxiv.org/abs/2302.09832
In federated learning, a large number of users collaborate on a global learning task. They alternate local computations and two-way communication with a distant orchestrating server. Communication, which can be slow and costly, …
External link:
http://arxiv.org/abs/2210.13277