Showing 1 - 10 of 19,294 for search: '"optimal rates"'
Machine learning algorithms in high-dimensional settings are highly susceptible to the influence of even a small fraction of structured outliers, making robust optimization techniques essential. In particular, within the $\epsilon$-contamination model …
External link:
http://arxiv.org/abs/2412.11003
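For concreteness, here is a minimal sketch of the $\epsilon$-contamination setup (an illustration with invented parameters, not the paper's method): an $\epsilon$ fraction of the sample is replaced by structured outliers, which drags off the sample mean while a simple robust baseline such as the coordinate-wise median survives.

```python
# Minimal sketch of the Huber epsilon-contamination model: (1 - eps) of the
# sample is drawn from the true distribution (here N(0, I_d), mean 0) and an
# eps fraction is replaced by structured outliers. The sample mean is dragged
# toward the outliers; the coordinate-wise median is barely affected.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 1000, 50, 0.05                 # illustrative sample size, dim, level
n_out = int(eps * n)

inliers = rng.normal(size=(n - n_out, d))  # true mean is 0
outliers = np.full((n_out, d), 20.0)       # a structured (aligned) outlier cloud
X = np.vstack([inliers, outliers])

print("sample mean error:      ", np.linalg.norm(X.mean(axis=0)))
print("coordinate median error:", np.linalg.norm(np.median(X, axis=0)))
```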
Federated Learning (FL) has gained significant recent attention in machine learning for its enhanced privacy and data security, making it indispensable in fields such as healthcare, finance, and personalized services. This paper investigates federated …
External link:
http://arxiv.org/abs/2411.15660
We study convex optimization problems under differential privacy (DP). With heavy-tailed gradients, existing works achieve suboptimal rates. The main obstacle is that existing gradient estimators have suboptimal tail properties, resulting in a superfluous …
External link:
http://arxiv.org/abs/2408.09891
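A minimal sketch of the clipping idea behind such gradient estimators (an assumption on my part, not the paper's estimator): truncating each gradient to a radius C bounds its tails, after which Gaussian noise scaled to the sensitivity C/n yields a private mean-gradient estimate.

```python
# Minimal sketch: clip per-sample gradients to norm C (bounding their tails),
# then add Gaussian-mechanism noise scaled to the resulting sensitivity.
# C and sigma are illustrative; no formal (epsilon, delta) accounting here.
import numpy as np

def private_clipped_mean(grads, C, sigma, rng):
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    noise = rng.normal(0.0, sigma * C / len(grads), size=grads.shape[1])
    return clipped.mean(axis=0) + noise    # sensitivity of the mean is ~C/n

rng = np.random.default_rng(1)
heavy_tailed = rng.standard_t(df=2.5, size=(10_000, 20))  # heavy-tailed sample
print(private_clipped_mean(heavy_tailed, C=5.0, sigma=1.0, rng=rng))
```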
In this paper, we provide novel optimal (or near-optimal) convergence rates in expectation for the last iterate of a clipped version of the stochastic subgradient method. We consider nonsmooth convex problems, over possibly unbounded domains, under heavy-tailed noise …
External link:
http://arxiv.org/abs/2410.00573
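A minimal sketch of a clipped stochastic subgradient iteration on a nonsmooth convex objective (the $\ell_1$ loss below, with heavy-tailed noise); the clipping level and step-size schedule are illustrative, not the tuning analyzed in the paper.

```python
# Minimal sketch: clipped stochastic subgradient method on the nonsmooth
# convex objective f(x) = ||x - x_star||_1 under heavy-tailed gradient noise,
# tracking the last iterate.
import numpy as np

rng = np.random.default_rng(2)
x_star = np.ones(10)
x = np.zeros(10)
C = 2.0                                          # illustrative clipping level

for t in range(1, 5001):
    g = np.sign(x - x_star)                      # a subgradient of the l1 loss
    g = g + rng.standard_t(df=2.2, size=10)      # heavy-tailed noise
    g = g * min(1.0, C / max(np.linalg.norm(g), 1e-12))   # clip the subgradient
    x = x - (0.5 / np.sqrt(t)) * g               # diminishing step size

print("last-iterate error:", np.linalg.norm(x - x_star))
```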
In this paper, we consider the problem of seriation of a permuted structured matrix based on noisy observations. The entries of the matrix relate to an expected quantification of interaction between two objects: the higher the value, the closer the objects …
External link:
http://arxiv.org/abs/2408.10004
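One classical approach to this problem is spectral seriation (Atkins et al.): sort the objects by the Fiedler vector of the similarity graph's Laplacian. The sketch below is an illustration of that classical method, not necessarily the paper's estimator.

```python
# Minimal sketch of spectral seriation: the Fiedler vector (eigenvector of the
# second-smallest Laplacian eigenvalue) of a Robinson-type similarity matrix
# is monotone in the latent order, so sorting by it recovers the seriation
# up to reversal.
import numpy as np

rng = np.random.default_rng(3)
n = 30
pos = rng.permutation(n).astype(float)           # hidden position of each object
A = np.exp(-np.abs(pos[:, None] - pos[None, :]) / 5.0)   # higher = closer
A = A + 0.05 * rng.normal(size=(n, n))           # noisy observation
A = (A + A.T) / 2

L = np.diag(A.sum(axis=1)) - A                   # graph Laplacian
_, eigvecs = np.linalg.eigh(L)
recovered = np.argsort(eigvecs[:, 1])            # sort by the Fiedler vector

true_order = np.argsort(pos)
match = max((recovered == true_order).sum(), (recovered == true_order[::-1]).sum())
print(f"{match}/{n} positions recovered")
```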
Author:
Berger, Max; Holzmann, Hajo
We obtain minimax-optimal convergence rates in the supremum norm, including information-theoretic lower bounds, for estimating the covariance kernel of a stochastic process which is repeatedly observed at discrete, synchronous design points. In particular, …
External link:
http://arxiv.org/abs/2407.13641
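The natural estimator in this design is the sample covariance of the observation vectors on the grid; the sketch below (an illustration, using a Brownian-motion example whose true kernel is $K(s,t) = \min(s,t)$) computes it and reports the sup-norm error, the quantity whose minimax rate the paper studies.

```python
# Minimal sketch: empirical covariance kernel from n i.i.d. sample paths
# observed at p synchronous design points. For Brownian motion the true
# kernel is K(s, t) = min(s, t), so the sup-norm error can be checked.
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 40
t_grid = np.arange(1, p + 1) / p

# Brownian paths: independent N(0, 1/p) increments, cumulatively summed
X = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / p), size=(n, p)), axis=1)

Xc = X - X.mean(axis=0)                  # center across the n replications
K_hat = Xc.T @ Xc / n                    # p x p estimate of K on the grid
K_true = np.minimum.outer(t_grid, t_grid)
print("sup-norm error:", np.abs(K_hat - K_true).max())
```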
Zero-error coding encompasses a variety of source and channel problems where the probability of error must be exactly zero. This condition is stricter than that of the vanishing error regime, where the error probability goes to zero as the code block length …
External link:
http://arxiv.org/abs/2407.02281
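The single-use version of the zero-error problem has a clean combinatorial form, illustrated by the sketch below (a standard textbook construction, not the paper's contribution): two inputs are confusable if some output can arise from both, and a zero-error code is an independent set in the confusability graph. For the pentagon ("typewriter") channel, the largest such code has size 2.

```python
# Minimal sketch: zero-error codes for one channel use are independent sets
# in the confusability graph. For the pentagon channel (input x can produce
# outputs x and x+1 mod 5), the largest single-use zero-error code has size 2.
from itertools import combinations

n = 5
outputs = {x: {x, (x + 1) % n} for x in range(n)}
confusable = {(a, b) for a, b in combinations(range(n), 2)
              if outputs[a] & outputs[b]}          # share a possible output

def is_zero_error_code(code):
    return all(pair not in confusable for pair in combinations(code, 2))

best = max((c for r in range(1, n + 1) for c in combinations(range(n), r)
            if is_zero_error_code(c)), key=len)
print("largest single-use zero-error code:", best)   # e.g. (0, 2)
```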
Functional linear regression is one of the fundamental and well-studied methods in functional data analysis. In this work, we investigate the functional linear regression model within the context of reproducing kernel Hilbert space by employing general …
External link:
http://arxiv.org/abs/2406.10005
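As a point of reference, the simplest regularization scheme in this family is ridge (Tikhonov) regularization; the sketch below (an illustrative discretization and tuning, not the paper's estimator) fits a discretized functional linear model $Y = \int X(t)\beta(t)\,dt + \varepsilon$ by ridge.

```python
# Minimal sketch: discretized functional linear regression fit with ridge
# (Tikhonov) regularization. Integrals are Riemann sums on a grid of p points.
import numpy as np

rng = np.random.default_rng(5)
n, p = 300, 60
t = np.arange(1, p + 1) / p
beta = np.sin(2 * np.pi * t)                       # true slope function

X = rng.normal(size=(n, p)).cumsum(axis=1) / np.sqrt(p)  # rough sample paths
y = X @ beta / p + 0.1 * rng.normal(size=n)        # y_i = <X_i, beta> + noise

lam = 1e-3                                         # illustrative ridge level
G = X.T @ X / (n * p)                              # empirical covariance operator
beta_hat = np.linalg.solve(G + lam * np.eye(p), X.T @ y / n)
print("RMSE of slope estimate:", np.sqrt(np.mean((beta_hat - beta) ** 2)))
```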
Federated learning has attracted significant recent attention due to its applicability across a wide range of settings where data is collected and analyzed across disparate locations. In this paper, we study federated nonparametric goodness-of-fit testing …
External link:
http://arxiv.org/abs/2406.06749
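To fix ideas about the setup (an illustration of a generic federated testing protocol, not the paper's procedure): each site computes a local statistic from its own samples and transmits only that, and the server aggregates. Below, local binned chi-square statistics against a uniform null are summed and compared with the chi-square null quantile.

```python
# Minimal sketch of a federated goodness-of-fit protocol: each of m sites
# sends only a local binned chi-square statistic against the Uniform(0, 1)
# null; the server sums them (asymptotically chi-square with m * (bins - 1)
# degrees of freedom under the null) and thresholds at the 95% quantile.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
m, n_local, bins = 10, 500, 20

def local_statistic(samples):
    counts, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
    expected = len(samples) / bins
    return ((counts - expected) ** 2 / expected).sum()

site_data = [rng.beta(1.2, 1.0, size=n_local) for _ in range(m)]  # not uniform
T = sum(local_statistic(x) for x in site_data)
threshold = stats.chi2.ppf(0.95, df=m * (bins - 1))
print("reject uniformity:", T > threshold)
```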
In this paper we revisit the DP stochastic convex optimization (SCO) problem. For convex smooth losses, it is well-known that the canonical DP-SGD (stochastic gradient descent) achieves the optimal rate of $O\left(\frac{LR}{\sqrt{n}} + \frac{LR\sqrt{d\log(1/\delta)}}{n\epsilon}\right)$ …
External link:
http://arxiv.org/abs/2406.02716
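For reference, a minimal sketch of canonical DP-SGD on a smooth convex loss (least squares below): clip each per-sample gradient to norm C and add Gaussian noise calibrated to the clip radius. Step size, clip radius, and noise multiplier are illustrative, and the formal $(\epsilon, \delta)$ accounting is omitted.

```python
# Minimal sketch of DP-SGD on least squares: per-sample gradient clipping to
# norm C, then Gaussian noise scaled to the clip radius. No privacy
# accounting is done; all hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n, d = 2000, 10
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)

w = np.zeros(d)
C, sigma, lr, batch = 1.0, 1.0, 0.1, 100
for _ in range(500):
    idx = rng.choice(n, size=batch, replace=False)
    grads = (X[idx] @ w - y[idx])[:, None] * X[idx]         # per-sample grads
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads *= np.minimum(1.0, C / np.maximum(norms, 1e-12))  # clip to norm C
    noisy_mean = grads.mean(axis=0) + rng.normal(0.0, sigma * C / batch, size=d)
    w -= lr * noisy_mean

print("parameter error:", np.linalg.norm(w - w_star))
```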