Showing 1 - 10 of 219 for search: '"Sharma, Pranay"'
We study a federated version of multi-objective optimization (MOO), where a single model is trained to optimize multiple objective functions. MOO has been extensively studied in the centralized setting but is less explored in federated or distributed…
External link:
http://arxiv.org/abs/2410.16398
Author:
Armacki, Aleksandar, Yu, Shuhua, Sharma, Pranay, Joshi, Gauri, Bajovic, Dragana, Jakovetic, Dusan, Kar, Soummya
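As a rough illustration of training a single model against multiple objectives, here is a minimal sketch of plain linear scalarization (a weighted sum of objectives followed by an ordinary gradient step). It is a generic MOO baseline, not the federated method of the paper above; the toy objectives, weights, and step size are all assumed.

import numpy as np

# Two toy objectives over the same model parameters x (illustrative only).
def f1(x): return np.sum((x - 1.0) ** 2)   # pulls x toward 1
def f2(x): return np.sum((x + 1.0) ** 2)   # pulls x toward -1

def grad(f, x, eps=1e-6):
    """Central-difference gradient, good enough for a toy sketch."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

x = np.zeros(3)
weights = [0.5, 0.5]                # assumed fixed scalarization weights
for _ in range(100):
    g = weights[0] * grad(f1, x) + weights[1] * grad(f2, x)
    x -= 0.1 * g                    # single model trades off both objectives
print(np.round(x, 4))               # near 0, the weighted compromise between the two optima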
We study high-probability convergence in online learning, in the presence of heavy-tailed noise. To combat the heavy tails, a general framework of nonlinear SGD methods is considered, subsuming several popular nonlinearities like sign, quantization, …
External link:
http://arxiv.org/abs/2410.13954
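The entry above names sign as one of the nonlinearities considered. A minimal sketch of sign SGD taming heavy-tailed (Cauchy, hence infinite-variance) gradient noise on a toy quadratic; the step-size schedule is an assumption, and no claim is made about matching the paper's guarantees.

import numpy as np

rng = np.random.default_rng(0)
x = np.full(5, 5.0)

def noisy_grad(x):
    # Heavy-tailed Cauchy noise added to the gradient of ||x||^2.
    return 2 * x + rng.standard_cauchy(x.shape)

for t in range(1, 2001):
    g = noisy_grad(x)
    x -= (0.5 / np.sqrt(t)) * np.sign(g)   # sign nonlinearity bounds each step
print(np.round(x, 2))                      # entries hover near 0 despite the heavy tails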
In cross-device federated learning (FL) with millions of mobile clients, only a small subset of clients participate in training in every communication round, and Federated Averaging (FedAvg) is the most popular algorithm in practice. Existing analyses…
External link:
http://arxiv.org/abs/2410.01209
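For readers unfamiliar with FedAvg, a minimal sketch of its round structure with partial client participation: sample a client subset, run a few local SGD steps on each, then average the returned models at the server. The toy quadratic objectives, cohort size, and step sizes are assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
num_clients, dim = 100, 3
# Each client's local optimum; the global optimum is their mean (toy setup).
targets = rng.normal(size=(num_clients, dim))

def local_sgd(x, target, steps=5, lr=0.1):
    for _ in range(steps):
        x = x - lr * 2 * (x - target)      # gradient of ||x - target||^2
    return x

x = np.zeros(dim)
for rnd in range(200):
    picked = rng.choice(num_clients, size=10, replace=False)  # partial participation
    updates = [local_sgd(x.copy(), targets[c]) for c in picked]
    x = np.mean(updates, axis=0)           # server averages returned models
print(np.round(x - targets.mean(axis=0), 2))  # hovers near the mean of local optima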
Federated Learning (FL) enables edge devices or clients to collaboratively train machine learning (ML) models without sharing their private data. Much of the existing work in FL focuses on efficiently learning a model for a single task. In this paper…
External link:
http://arxiv.org/abs/2406.00302
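The abstract is cut off before the paper's construction, so the following is only a loose, assumed illustration of one generic way to serve multiple tasks in FL: a shared parameter vector averaged across all clients plus a per-task component averaged within each task's clients. It is not necessarily what the paper proposes.

import numpy as np

rng = np.random.default_rng(0)
shared = np.zeros(4)
heads = {t: np.zeros(4) for t in ("task_a", "task_b")}   # assumed per-task parts
client_tasks = ["task_a"] * 5 + ["task_b"] * 5
optima = {t: rng.normal(size=4) for t in heads}          # toy per-task targets

for rnd in range(100):
    new_shared, new_heads = [], {t: [] for t in heads}
    for task in client_tasks:
        # Local step on ||shared + head - optimum||^2 for this client's task.
        g = 2 * (shared + heads[task] - optima[task])
        new_shared.append(shared - 0.05 * g)
        new_heads[task].append(heads[task] - 0.05 * g)
    shared = np.mean(new_shared, axis=0)                 # averaged over all clients
    heads = {t: np.mean(v, axis=0) for t, v in new_heads.items()}
print(np.round(shared + heads["task_a"] - optima["task_a"], 3))  # near 0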
We study the problem of communication-efficient distributed vector mean estimation, a commonly used subroutine in distributed optimization and Federated Learning (FL). Rand-$k$ sparsification is a commonly used technique to reduce communication cost, …
External link:
http://arxiv.org/abs/2310.18868
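A minimal sketch of Rand-$k$ sparsification as named in the entry above: keep $k$ uniformly random coordinates and rescale by $d/k$ so the compressed vector is an unbiased estimator of the original. The dimensions, client count, and $k$ below are arbitrary toy choices.

import numpy as np

rng = np.random.default_rng(0)

def rand_k(v, k):
    """Rand-k sparsification: keep k random coordinates, rescale by d/k
    so the compressed vector is an unbiased estimate of v."""
    d = v.size
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * (d / k)
    return out

d, n, k = 100, 50, 10
vectors = rng.normal(size=(n, d))                 # one vector per client
est = np.mean([rand_k(v, k) for v in vectors], axis=0)
print(np.linalg.norm(est - vectors.mean(axis=0)))  # error of the sparsified mean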
Author:
Armacki, Aleksandar, Sharma, Pranay, Joshi, Gauri, Bajovic, Dragana, Jakovetic, Dusan, Kar, Soummya
We study high-probability convergence guarantees of learning on streaming data in the presence of heavy-tailed noise. In the proposed scenario, the model is updated in an online fashion, as new information is observed, without storing any additional…
External link:
http://arxiv.org/abs/2310.18784
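Gradient clipping is another nonlinearity of the kind this line of work studies. Below is a toy streaming sketch where each fresh heavy-tailed gradient sample is clipped and used once, with nothing stored; the step sizes, clipping radius, and Cauchy noise model are assumptions, not the paper's exact scheme.

import numpy as np

rng = np.random.default_rng(1)
x, tau = np.full(5, 5.0), 1.0                 # tau: assumed clipping radius

def clip(g, tau):
    n = np.linalg.norm(g)
    return g if n <= tau else g * (tau / n)   # norm clipping as the nonlinearity

for t in range(1, 5001):
    g = 2 * x + rng.standard_cauchy(x.shape)  # one streaming heavy-tailed sample
    x -= (0.5 / np.sqrt(t)) * clip(g, tau)    # update uses only the fresh sample
print(np.round(x, 2))                         # entries settle near 0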
Stochastic approximation with multiple coupled sequences (MSA) has found broad applications in machine learning as it encompasses a rich class of problems including bilevel optimization (BLO), multi-level compositional optimization (MCO), and reinforcement learning…
External link:
http://arxiv.org/abs/2306.01648
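A minimal two-sequence sketch in the spirit of coupled stochastic approximation, here a deterministic toy bilevel instance: a fast sequence y tracks the inner solution y*(x) = x while a slow sequence x minimizes (y*(x) - 1)^2, so both approach 1. The step-size schedules are assumptions; the paper's MSA framework is far more general.

# Toy coupled sequences (not the MSA scheme of the paper above).
x, y = 0.0, 5.0
for t in range(1, 20001):
    beta = 0.4 / t**0.5    # fast (inner) step size
    alpha = 0.4 / t**0.9   # slow (outer) step size, decays faster
    y -= beta * 2 * (y - x)           # inner step toward y*(x) = x
    x -= alpha * 2 * (y - 1.0)        # outer step via y ~ y*(x), dy*/dx = 1
print(round(x, 3), round(y, 3))       # both approach 1.0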
Author:
Jia, Jinghan, Liu, Jiancheng, Ram, Parikshit, Yao, Yuguang, Liu, Gaowen, Liu, Yang, Sharma, Pranay, Liu, Sijia
In response to recent data regulation requirements, machine unlearning (MU) has emerged as a critical process to remove the influence of specific examples from a given model. Although exact unlearning can be achieved through complete model retraining…
External link:
http://arxiv.org/abs/2304.04934
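The exact-unlearning baseline mentioned above, complete retraining on the retained data only, is easy to sketch. The least-squares "model" and the forget set below are toy assumptions used purely to make the idea concrete.

import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)

def train(X, y):
    # Closed-form least squares: the "model" is just the fitted weights.
    return np.linalg.lstsq(X, y, rcond=None)[0]

forget = np.arange(10)                        # indices whose influence must go
keep = np.setdiff1d(np.arange(len(y)), forget)
w_full = train(X, y)                          # original model
w_exact = train(X[keep], y[keep])             # exact unlearning: full retrain
print(np.linalg.norm(w_full - w_exact))       # the forgotten points' influence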
Invariant risk minimization (IRM) has received increasing attention as a way to acquire environment-agnostic data representations and predictions, and as a principled solution for preventing spurious correlations from being learned and for improving…
External link:
http://arxiv.org/abs/2303.02343
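A hedged sketch of the IRMv1-style penalty commonly used to instantiate IRM: per environment, penalize the squared gradient of the risk with respect to a dummy scale w evaluated at w = 1. The linear model, synthetic environments, and penalty weight are all assumed, and this is not necessarily the formulation studied in the paper.

import numpy as np

def irm_penalty(phi, X, y):
    pred = X @ phi
    grad_w = np.mean(2 * (pred - y) * pred)   # d/dw mean((w*pred - y)^2) at w=1
    return grad_w ** 2

rng = np.random.default_rng(0)
phi = rng.normal(size=3)
envs = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]
risk = sum(np.mean((X @ phi - y) ** 2) for X, y in envs)
penalty = sum(irm_penalty(phi, X, y) for X, y in envs)
loss = risk + 10.0 * penalty      # assumed penalty weight; minimize with any optimizer
print(loss)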
Minimax optimization has seen a surge in interest with the advent of modern applications such as GANs, and it is inherently more challenging than simple minimization. The difficulty is exacerbated by the training data residing at multiple edge devices…
External link:
http://arxiv.org/abs/2302.04249
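A minimal sketch of gradient descent-ascent (GDA), the basic primitive in minimax optimization, on a strongly-convex-strongly-concave toy saddle: descend in x, ascend in y. This is a generic single-machine illustration with an assumed step size, not the federated algorithm of the paper, where such steps would run on each edge device before server averaging.

# GDA on f(x, y) = x^2 + x*y - y^2, whose unique saddle point is (0, 0).
x, y, lr = 1.0, 1.0, 0.05
for _ in range(500):
    gx = 2 * x + y                    # df/dx
    gy = x - 2 * y                    # df/dy
    x, y = x - lr * gx, y + lr * gy   # descend in x, ascend in y
print(round(x, 4), round(y, 4))       # converges to the saddle (0, 0)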