Showing 1 - 10 of 149 for search: '"Hellström, Fredrik"'
The safe integration of machine learning modules in decision-making processes hinges on their ability to quantify uncertainty. A popular technique to achieve this goal is conformal prediction (CP), which transforms an arbitrary base predictor into a…
External link: http://arxiv.org/abs/2401.11810
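The snippet cuts off before describing what CP outputs; as a rough illustration of the general technique (not the paper's specific construction), here is a minimal sketch of standard split conformal prediction for regression, where the function name and the absolute-residual score are illustrative assumptions:

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Wrap an arbitrary fitted base predictor `model` into a set
    predictor: the returned intervals cover the true label with
    probability at least 1 - alpha, assuming the calibration and
    test data are exchangeable."""
    # Nonconformity scores on a held-out calibration set; plain
    # absolute residuals are a common default choice.
    scores = np.abs(y_cal - model.predict(X_cal))
    # Conformal quantile with the finite-sample correction
    # ceil((n + 1) * (1 - alpha)) / n.
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, level, method="higher")
    # Symmetric predictive interval around each point prediction.
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat
```

Any fitted regressor exposing a `predict` method can serve as the base predictor here.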
Authors: Hellström, Fredrik; Guedj, Benjamin
We derive generic information-theoretic and PAC-Bayesian generalization bounds involving an arbitrary convex comparator function, which measures the discrepancy between the training and population loss. The bounds hold under the assumption that the c…
External link: http://arxiv.org/abs/2310.10534
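The snippet ends before the bounds are stated; the change-of-measure step that typically drives such results is the Donsker-Varadhan variational formula, shown below in schematic notation that is not taken from the paper:

```latex
% Donsker--Varadhan: for any distributions Q, P and measurable g,
\mathbb{E}_{Q}[g] \;\le\; \mathrm{KL}(Q \,\|\, P) + \log \mathbb{E}_{P}\!\left[ e^{g} \right].
```

Instantiating g with a (scaled) convex comparator of the training and population losses and applying Jensen's inequality is the standard route from this formula to information-theoretic and PAC-Bayesian generalization bounds.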
A fundamental question in theoretical machine learning is generalization. Over the past decades, the PAC-Bayesian approach has been established as a flexible framework to address the generalization capabilities of machine learning algorithms, and des…
External link: http://arxiv.org/abs/2309.04381
Published in: NeurIPS 2023
We consider online prediction of a binary sequence with expert advice. For this setting, we devise label-efficient forecasting algorithms, which use a selective sampling scheme that enables collecting far fewer labels than standard procedures, while…
External link: http://arxiv.org/abs/2302.08397
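As background for the setting, here is a sketch of the classical label-efficient exponentially weighted forecaster, which queries labels only on randomly sampled rounds and compensates with importance weighting; this is the textbook baseline, not the algorithm devised in the paper, and the function name and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def label_efficient_forecaster(expert_preds, labels, eta=0.5, eps=0.1):
    """Classical label-efficient exponentially weighted forecaster.

    expert_preds: (T, N) array of expert predictions in [0, 1].
    labels:       (T,) binary sequence (queried only when sampled).
    eta:          learning rate; eps: label-query probability.
    Returns the forecaster's predictions and the number of labels used.
    """
    T, N = expert_preds.shape
    log_w = np.zeros(N)          # log-weights, for numerical stability
    preds = np.empty(T)
    queries = 0
    for t in range(T):
        w = np.exp(log_w - log_w.max())
        preds[t] = w @ expert_preds[t] / w.sum()   # weighted average
        if rng.random() < eps:                     # selective sampling
            queries += 1
            # Importance-weighted absolute losses keep the weight
            # update unbiased despite the missing labels.
            losses = np.abs(expert_preds[t] - labels[t]) / eps
            log_w -= eta * losses
    return preds, queries
```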
Authors: Hellström, Fredrik; Durisi, Giuseppe
Published in: Advances in Neural Information Processing Systems, volume 35, pages 20648-20660, 2022
Recent work has established that the conditional mutual information (CMI) framework of Steinke and Zakynthinou (2020) is expressive enough to capture generalization guarantees in terms of algorithmic stability, VC dimension, and related complexity me…
External link: http://arxiv.org/abs/2210.06511
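For reference, the quantity at the heart of this framework is the conditional mutual information of Steinke and Zakynthinou (2020), stated here in schematic notation:

```latex
% Draw a supersample \tilde{Z} of n i.i.d. pairs of samples and a
% uniform membership vector S in {0,1}^n that selects one sample
% from each pair to form the training set \tilde{Z}_S; with
% W = A(\tilde{Z}_S) the algorithm's output,
\mathrm{CMI}(A) \;=\; I\big( W ;\, S \,\big|\, \tilde{Z} \big).
```

Unlike the unconditional mutual information I(W; S), this quantity is always bounded by n log 2, since S takes at most 2^n values.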
Authors: Hellström, Fredrik; Durisi, Giuseppe
Published in: Advances in Neural Information Processing Systems, volume 35, pages 10108-10121, 2022
We present a new family of information-theoretic generalization bounds, in which the training loss and the population loss are compared through a jointly convex function. This function is upper-bounded in terms of the disintegrated, samplewise, evalu…
External link: http://arxiv.org/abs/2210.06422
Authors: Hellström, Fredrik; Durisi, Giuseppe
We present a framework to derive bounds on the test loss of randomized learning algorithms for the case of bounded loss functions. Drawing from Steinke & Zakynthinou (2020), this framework leads to bounds that depend on the conditional information de…
External link: http://arxiv.org/abs/2010.11552
Authors: Hellström, Fredrik; Durisi, Giuseppe
Published in: IEEE J. Sel. Areas Inf. Theory 1.3 (2020) 824-839
We present a general approach, based on exponential inequalities, to derive bounds on the generalization error of randomized learning algorithms. Using this approach, we provide bounds on the average generalization error as well as bounds on its tail…
External link: http://arxiv.org/abs/2005.08044
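As a representative example of the average-case bounds obtainable via exponential inequalities, consider the classical bound of Xu and Raginsky (2017); this illustrates the flavor of such results and is not necessarily the paper's exact statement:

```latex
% For a sigma-subgaussian loss, hypothesis W = A(S), and a training
% set S of n i.i.d. samples, the expected gap between the population
% loss L and the training loss \hat{L} satisfies
\big| \mathbb{E}\big[ L(W) - \hat{L}(W, S) \big] \big|
  \;\le\; \sqrt{\frac{2 \sigma^{2}\, I(W; S)}{n}}.
```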
Authors: Hellström, Fredrik; Durisi, Giuseppe
We present a general approach to deriving bounds on the generalization error of randomized learning algorithms. Our approach can be used to obtain bounds on the average generalization error as well as bounds on its tail probabilities, both for the ca…
External link: http://arxiv.org/abs/2004.09148
Authors: Catena, Riccardo; Hellström, Fredrik
We study the capture and subsequent annihilation of inelastic dark matter (DM) in the Sun, placing constraints on the DM-nucleon scattering cross section from the null result of the IceCube neutrino telescope. We then compare such constraints with ex…
External link: http://arxiv.org/abs/1808.08082
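For context, analyses of this kind are typically built on the standard evolution equation for the number N of DM particles captured in the Sun, shown here schematically with evaporation neglected (not quoted from the paper):

```latex
% C_cap: capture rate; C_ann: annihilation coefficient. At
% capture-annihilation equilibrium, dN/dt = 0 and the annihilation
% rate saturates at Gamma_ann = C_cap / 2.
\frac{\mathrm{d}N}{\mathrm{d}t} = C_{\mathrm{cap}} - C_{\mathrm{ann}} N^{2},
\qquad
\Gamma_{\mathrm{ann}} = \tfrac{1}{2}\, C_{\mathrm{ann}} N^{2}.
```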