Showing 1 - 10 of 394 for search: '"Feder, Meir"'
Author:
Vituri, Shlomi, Feder, Meir
In this paper we consider the problem of universal {\em batch} learning in a misspecification setting with log-loss. In this setting the hypothesis class is a set of models $\Theta$. However, the data is generated by an unknown distribution that may not belong to $\Theta$ …
External link:
http://arxiv.org/abs/2405.07252
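For context, a minimal sketch of the expected-regret criterion that such universal batch learning settings typically optimize, assuming the learner outputs a predictive distribution $q$ after seeing a training set (the paper's exact formulation may differ):

\[ R(q) = \mathbb{E}\left[\log\frac{1}{q(y \mid x)}\right] - \inf_{\theta \in \Theta} \mathbb{E}\left[\log\frac{1}{p_\theta(y \mid x)}\right], \]

where the expectation is taken over the unknown data-generating distribution, which need not lie in $\Theta$.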
Author:
Hendel, Adi, Feder, Meir
Statistical learning theory and the Probably Approximately Correct (PAC) criterion are the common approach to mathematical learning theory. PAC is widely used to analyze learning problems and algorithms, and has been studied thoroughly. Uniform worst-case …
External link:
http://arxiv.org/abs/2405.00792
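For reference, a minimal sketch of the (agnostic) PAC criterion the snippet refers to, in standard textbook notation rather than the paper's: an algorithm $A$ satisfies it if, for every $\epsilon, \delta > 0$ and sufficiently large sample size $m$,

\[ \Pr_{S \sim \mathcal{D}^m}\!\left[ L_{\mathcal{D}}(A(S)) \le \inf_{h \in \mathcal{H}} L_{\mathcal{D}}(h) + \epsilon \right] \ge 1 - \delta, \]

where $L_{\mathcal{D}}$ is the expected loss under the data distribution $\mathcal{D}$ and $\mathcal{H}$ is the hypothesis class.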
Learning algorithms that divide the data into batches are prevalent in many machine-learning applications, typically offering useful trade-offs between computational efficiency and performance. In this paper, we examine the benefits of batch-partitioning …
External link:
http://arxiv.org/abs/2306.08432
Author:
Bibas, Koby, Feder, Meir
In supervised batch learning, the predictive normalized maximum likelihood (pNML) has been proposed as the min-max regret solution for the distribution-free setting, where no distributional assumptions are made on the data. However, the pNML is not defined …
External link:
http://arxiv.org/abs/2206.08757
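Since the snippet leans on the pNML construction, here is a minimal sketch of its usual definition in the pNML literature (notation is ours, not quoted from this paper): given a training set $z^N$ and a test feature $x$,

\[ q_{\mathrm{pNML}}(y \mid x) = \frac{p_{\hat\theta(z^N, x, y)}(y \mid x)}{\sum_{y'} p_{\hat\theta(z^N, x, y')}(y' \mid x)}, \]

where $\hat\theta(z^N, x, y)$ is the maximum-likelihood estimate computed with the candidate test pair $(x, y)$ appended to the training set; the log of the normalizing denominator is the min-max regret.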
Detecting out-of-distribution (OOD) samples is vital for developing machine-learning-based models for critical safety systems. Common approaches for OOD detection assume access to some OOD samples during training, which may not be available in a real-world setting …
External link:
http://arxiv.org/abs/2110.09246
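To make the detection task concrete, a minimal Python sketch of the maximum-softmax-probability score, a common OOD baseline from the literature and not the method of this paper; `logits` is a hypothetical array of per-class model outputs:

    import numpy as np

    def max_softmax_score(logits):
        # Maximum softmax probability: a widely used OOD baseline.
        # Low scores suggest the input may be out-of-distribution.
        z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
        p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
        return p.max(axis=-1)

In practice a threshold on this score separates in-distribution from suspected OOD inputs.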
Adversarial attacks have been shown to be highly effective at degrading the performance of deep neural networks (DNNs). The most prominent defense is adversarial training, a method for learning a robust model. Nevertheless, adversarial training does …
External link:
http://arxiv.org/abs/2109.01945
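For a concrete picture of the attacks the snippet refers to, a minimal PyTorch sketch of the fast gradient sign method (FGSM), a standard attack from the literature rather than anything specific to this paper; `model`, `x`, `y`, and `eps` are hypothetical placeholders:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps):
        # One-step attack: perturb the input in the direction that
        # increases the classification loss, with per-pixel step eps.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + eps * x_adv.grad.sign()).detach()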
Author:
Bibas, Koby, Feder, Meir
A fundamental principle of learning theory is that there is a trade-off between the complexity of a prediction rule and its ability to generalize. Modern machine learning models do not obey this paradigm: they produce an accurate prediction even with …
External link:
http://arxiv.org/abs/2102.07181
Author:
Feder, Meir, Polyanskiy, Yury
We consider the question of sequential prediction under the log-loss in terms of cumulative regret. Namely, given a hypothesis class of distributions, the learner sequentially predicts the (distribution of the) next letter in the sequence, and its performance …
External link:
http://arxiv.org/abs/2102.00050
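A minimal sketch of the cumulative-regret quantity such sequential log-loss settings study, in standard notation that may differ from the paper's:

\[ R_T(q) = \sum_{t=1}^{T} \log\frac{1}{q(x_t \mid x^{t-1})} - \inf_{\theta \in \Theta} \sum_{t=1}^{T} \log\frac{1}{p_\theta(x_t \mid x^{t-1})}, \]

i.e., the excess log-loss of the learner's sequential predictions over the best distribution in the hypothesis class, chosen in hindsight.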
The predictive normalized maximum likelihood (pNML) approach has recently been proposed as the min-max optimal solution to the batch learning problem where both the training set and the test data feature are individual, known sequences. This approach …
External link:
http://arxiv.org/abs/2011.10334
The notion of implicit bias, or implicit regularization, has been suggested as a means to explain the surprising generalization ability of modern-day overparameterized learning algorithms. This notion refers to the tendency of the optimization algorithm …
External link:
http://arxiv.org/abs/2003.06152
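As a canonical illustration of implicit bias (a textbook fact, not a result quoted from this paper): gradient descent on underdetermined least squares, initialized at zero, converges to the minimum-norm interpolator. Assuming $X \in \mathbb{R}^{n \times d}$ with $n < d$ and full row rank,

\[ w_{\infty} = X^{\top}(XX^{\top})^{-1} y = \arg\min_{w} \{\, \|w\|_2 : Xw = y \,\}, \]

so the optimizer itself selects one particular low-norm solution among the many that fit the data exactly.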