Showing 1 - 10 of 6,937 for search: '"Massart, P."'
Kolmogorov-Smirnov (KS) tests rely on the convergence to zero of the KS-distance $d(F_n,G)$ in the one sample case, and of $d(F_n,G_m)$ in the two sample case. In each case the assumption (the null hypothesis) is that $F=G$, and so $d(F,G)=0$. In this…
External link:
http://arxiv.org/abs/2409.18087
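For orientation, here is a minimal Python sketch of the two distances the snippet refers to, using the standard SciPy KS tests; the sample sizes and distributions are arbitrary choices for illustration:

```python
import numpy as np
from scipy import stats

# One-sample KS: d(F_n, G) = sup_x |F_n(x) - G(x)| against a hypothesized CDF G.
rng = np.random.default_rng(0)
sample = rng.normal(size=500)
stat, p = stats.kstest(sample, "norm")      # under H0: F = G = N(0, 1)
print(f"one-sample d(F_n, G)   = {stat:.4f}, p = {p:.3f}")

# Two-sample KS: d(F_n, G_m) between two empirical distribution functions.
other = rng.normal(size=500)
stat2, p2 = stats.ks_2samp(sample, other)
print(f"two-sample d(F_n, G_m) = {stat2:.4f}, p = {p2:.3f}")
```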
We study the task of online learning in the presence of Massart noise. Instead of assuming that the online adversary chooses an arbitrary sequence of labels, we assume that the context $\mathbf{x}$ is selected adversarially but the label $y$ presented…
External link:
http://arxiv.org/abs/2405.12958
We propose SGD-exp, a stochastic gradient descent approach for linear and ReLU regressions under Massart noise (adversarial semi-random corruption model) for the fully streaming setting. We show novel nearly linear convergence guarantees of SGD-exp to…
External link:
http://arxiv.org/abs/2403.01204
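The following is only a hedged illustration of the streaming Massart-regression setting the abstract describes, not the paper's SGD-exp algorithm; the corruption rule, the exponentially decaying step-size schedule, and the l1 subgradient update are all assumptions made for this sketch:

```python
import numpy as np

# Illustrative streaming setup: linear regression where each label is
# adversarially replaced with probability <= eta (Massart-style corruption).
rng = np.random.default_rng(1)
d, eta, steps = 10, 0.3, 20000
w_true = rng.normal(size=d)

w = np.zeros(d)
for t in range(1, steps + 1):
    x = rng.normal(size=d)              # one fresh example per step (streaming)
    y = x @ w_true
    if rng.random() < eta:              # semi-random corruption of the label
        y = -y
    step = 0.5 * 0.9995 ** t            # assumed exponentially decaying step size
    w -= step * np.sign(x @ w - y) * x  # SGD on the robust l1 loss
print("parameter error:", np.linalg.norm(w - w_true))
```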
Author:
Reeve, Henry W J
The Dvoretzky--Kiefer--Wolfowitz--Massart inequality gives a sub-Gaussian tail bound on the supremum norm distance between the empirical distribution function of a random sample and its population counterpart. We provide a short proof of a result that…
External link:
http://arxiv.org/abs/2403.16651
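The inequality in question can be stated precisely; with Massart's tight constant, its standard formulation reads:

```latex
% Dvoretzky--Kiefer--Wolfowitz inequality with Massart's tight constant:
\[
  \mathbb{P}\!\left( \sup_{x \in \mathbb{R}} \bigl| F_n(x) - F(x) \bigr| > \varepsilon \right)
  \;\le\; 2 e^{-2 n \varepsilon^{2}}, \qquad \varepsilon > 0,
\]
% where $F_n$ is the empirical distribution function of an i.i.d. sample
% of size $n$ drawn from the population distribution $F$.
```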
We study the problem of PAC learning a single neuron in the presence of Massart noise. Specifically, for a known activation function $f: \mathbb{R} \to \mathbb{R}$, the learner is given access to labeled examples $(\mathbf{x}, y) \in \mathbb{R}^d \times \mathbb{R}$…
External link:
http://arxiv.org/abs/2210.09949
We study the complexity of PAC learning halfspaces in the presence of Massart noise. In this problem, we are given i.i.d. labeled examples $(\mathbf{x}, y) \in \mathbb{R}^N \times \{ \pm 1\}$, where the distribution of $\mathbf{x}$ is arbitrary and the…
External link:
http://arxiv.org/abs/2207.14266
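As a concrete illustration of the Massart noise model recurring throughout these entries, here is a short Python sketch that generates halfspace labels flipped with a point-dependent probability $\eta(\mathbf{x}) \leq \eta$; the particular choice of $\eta(\mathbf{x})$ and the Gaussian marginal are arbitrary assumptions, not prescribed by the model:

```python
import numpy as np

# Massart noise for halfspaces: the clean label is sign(<w*, x>), and an
# adversary may flip it with point-dependent probability eta(x) that never
# exceeds a global bound eta < 1/2.
rng = np.random.default_rng(2)
n, d, eta = 1000, 5, 0.4
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))                # Gaussian marginal, for concreteness
clean = np.sign(X @ w_star)
eta_x = eta * np.abs(np.tanh(X[:, 0]))     # any eta(x) <= eta is allowed
flip = rng.random(n) < eta_x
y = np.where(flip, -clean, clean)
print(f"fraction of flipped labels: {flip.mean():.3f} (bound eta = {eta})")
```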
Author:
Nasser, Rajai, Tiegel, Stefan
We give tight statistical query (SQ) lower bounds for learning halfspaces in the presence of Massart noise. In particular, suppose that all labels are corrupted with probability at most $\eta$. We show that for arbitrary $\eta \in [0,1/2]$ every SQ…
External link:
http://arxiv.org/abs/2201.09818
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under the Gaussian distribution. In the Massart model, an adversary is allowed to flip the label of each point $\mathbf{x}$ with unknown probability $\eta(\mathbf{x}) \leq \eta$…
External link:
http://arxiv.org/abs/2108.08767
Author:
Diakonikolas, Ilias, Impagliazzo, Russell, Kane, Daniel, Lei, Rex, Sorrell, Jessica, Tzamos, Christos
We study the problem of boosting the accuracy of a weak learner in the (distribution-independent) PAC model with Massart noise. In the Massart noise model, the label of each example $x$ is independently misclassified with probability $\eta(x) \leq \eta$…
External link:
http://arxiv.org/abs/2106.07779
We study the fundamental problem of ReLU regression, where the goal is to fit Rectified Linear Units (ReLUs) to data. This supervised learning task is efficiently solvable in the realizable setting, but is known to be computationally hard with adversarial…
External link:
http://arxiv.org/abs/2109.04623
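For the realizable setting mentioned in the abstract, a plain gradient-descent fit already illustrates the task; the sketch below is an assumed baseline for illustration, not the paper's algorithm:

```python
import numpy as np

# ReLU regression in the realizable setting: fit y = max(0, <w, x>)
# by gradient descent on the squared loss.
rng = np.random.default_rng(3)
n, d = 2000, 8
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.maximum(0.0, X @ w_true)                  # realizable: no label noise

w = rng.normal(size=d) * 0.1
for _ in range(500):
    pred = np.maximum(0.0, X @ w)
    grad = ((pred - y) * (X @ w > 0)) @ X / n    # chain rule through the ReLU
    w -= 0.5 * grad
print("parameter error:", np.linalg.norm(w - w_true))
```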