Showing 1 - 10 of 59 for search: '"Levy, Kfir Y"'
Author:
Reshef, Roie, Levy, Kfir Y.
This paper addresses the challenge of preserving privacy in Federated Learning (FL) within centralized systems, focusing on both trusted and untrusted server scenarios. We analyze this setting within the Stochastic Convex Optimization (SCO) framework …
External link:
http://arxiv.org/abs/2407.12396
Author:
Dahan, Tehila, Levy, Kfir Y.
In this paper, we investigate the challenging framework of Byzantine-robust training in distributed machine learning (ML) systems, focusing on enhancing both efficiency and practicality. As distributed ML systems become integral for complex ML tasks, …
External link:
http://arxiv.org/abs/2405.14759
We present the first finite time global convergence analysis of policy gradient in the context of infinite horizon average reward Markov decision processes (MDPs). Specifically, we focus on ergodic tabular MDPs with finite state and action spaces. …
External link:
http://arxiv.org/abs/2403.06806
Byzantine-robust learning has emerged as a prominent fault-tolerant distributed machine learning framework. However, most techniques focus on the static setting, wherein the identity of Byzantine workers remains unchanged throughout the learning process. …
External link:
http://arxiv.org/abs/2402.02951
Author:
Khodak, Mikhail, Osadchiy, Ilya, Harris, Keegan, Balcan, Maria-Florina, Levy, Kfir Y., Meir, Ron, Wu, Zhiwei Steven
We study online meta-learning with bandit feedback, with the goal of improving performance across multiple tasks if they are similar according to some natural similarity measure. As the first to target the adversarial online-within-online partial-information setting, …
External link:
http://arxiv.org/abs/2307.02295
Author:
Levy, Kfir Y.
We consider stochastic convex optimization problems where the objective is an expectation over smooth functions. For this setting we suggest a novel gradient estimate that combines two recent mechanisms related to the notion of momentum. Then, we …
External link:
http://arxiv.org/abs/2304.04172
Author:
Levy, Kfir Y.
We consider distributed learning scenarios where M machines interact with a parameter server along several communication rounds in order to minimize a joint objective function. Focusing on the heterogeneous case, where different machines may draw samples …
External link:
http://arxiv.org/abs/2304.04169
Many compression techniques have been proposed to reduce the communication overhead of Federated Learning training procedures. However, these are typically designed for compressing model updates, which are expected to decay throughout training. As a …
External link:
http://arxiv.org/abs/2302.00543
Universal methods for optimization are designed to achieve theoretically optimal convergence rates without any prior knowledge of the problem's regularity parameters or the accuracy of the gradient oracle employed by the optimizer. In this regard, …
External link:
http://arxiv.org/abs/2206.09352
We study meta-learning for adversarial multi-armed bandits. We consider the online-within-online setup, in which a player (learner) encounters a sequence of multi-armed bandit episodes. The player's performance is measured as regret against the best …
External link:
http://arxiv.org/abs/2205.15921