Showing 1 - 10 of 231 for search: '"Zaidi, Abdellatif"'
A client device which has access to $n$ training data samples needs to obtain a statistical hypothesis or model $W$ and then to send it to a remote server. The client and the server devices share some common randomness sequence as well as a prior on…
External link:
http://arxiv.org/abs/2406.08193
A major challenge in designing efficient statistical supervised learning algorithms is finding representations that perform well not only on available training samples but also on unseen data. While the study of representation learning has spurred mu…
External link:
http://arxiv.org/abs/2402.03254
Neural network compression has been an increasingly important subject, not only due to its practical relevance, but also due to its theoretical implications, as there is an explicit connection between compressibility and generalization error. Recent…
External link:
http://arxiv.org/abs/2306.08125
We investigate the generalization error of statistical learning models in a Federated Learning (FL) setting. Specifically, we study the evolution of the generalization error with the number of communication rounds $R$ between $K$ clients and a parame…
External link:
http://arxiv.org/abs/2306.05862
We study the generalization error of statistical learning models in a Federated Learning (FL) setting. Specifically, there are $K$ devices or clients, each holding an independent own dataset of size $n$. Individual models, learned locally via Stochas…
External link:
http://arxiv.org/abs/2304.12216
Author:
Sefidgaran, Milad, Zaidi, Abdellatif
In this paper, we establish novel data-dependent upper bounds on the generalization error through the lens of a "variable-size compressibility" framework that we introduce newly here. In this framework, the generalization error of an algorithm is lin…
External link:
http://arxiv.org/abs/2303.05369
In this paper, we use tools from rate-distortion theory to establish new upper bounds on the generalization error of statistical distributed learning algorithms. Specifically, there are $K$ clients whose individually chosen models are aggregated by a…
External link:
http://arxiv.org/abs/2206.02604
Author:
Zaidi, Abdellatif
We study a class of $K$-encoder hypothesis testing against conditional independence problems. Under the criterion that stipulates minimization of the Type II error subject to a (constant) upper bound $\epsilon$ on the Type I error, we characterize th…
External link:
http://arxiv.org/abs/2107.05538
Author:
Moldoveanu, Matei, Zaidi, Abdellatif
It is widely perceived that leveraging the success of modern machine learning techniques to mobile devices and wireless networks has the potential of enabling important new services. This, however, poses significant challenges, essentially due to tha…
External link:
http://arxiv.org/abs/2107.03433
Author:
Sarbu, Septimia, Zaidi, Abdellatif
We consider the problem of learning parametric distributions from their quantized samples in a network. Specifically, $n$ agents or sensors observe independent samples of an unknown parametric distribution; and each of them uses $k$ bits to describe…
External link:
http://arxiv.org/abs/2105.12019