Showing 1 - 10
of 22
for the search: '"Viallard, Paul"'
We propose data-dependent uniform generalization bounds by approaching the problem from a PAC-Bayesian perspective. We first apply the PAC-Bayesian framework on 'random sets' in a rigorous way, where the training algorithm is assumed to output a data…
External link:
http://arxiv.org/abs/2404.17442
In statistical learning theory, a generalization bound usually involves a complexity measure imposed by the considered theoretical framework. This limits the scope of such bounds, as other forms of capacity measures or regularizations are used in algorithms…
External link:
http://arxiv.org/abs/2402.13285
Modern machine learning usually involves predictors in the overparametrised setting (number of trained parameters greater than dataset size), and their training yields not only good performance on training data, but also good generalisation capacity…
External link:
http://arxiv.org/abs/2402.08508
This paper contains a recipe for deriving new PAC-Bayes generalisation bounds based on the $(f, \Gamma)$-divergence, and, in addition, presents PAC-Bayes generalisation bounds where we interpolate between a series of probability divergences (including…
External link:
http://arxiv.org/abs/2402.05101
Authors:
Dupuis, Benjamin, Viallard, Paul
Understanding the generalization abilities of modern machine learning algorithms has been a major research topic over the past decades. In recent years, the learning dynamics of Stochastic Gradient Descent (SGD) have been related to heavy-tailed dynamics…
External link:
http://arxiv.org/abs/2312.00427
Minimising upper bounds on the population risk or the generalisation gap has been widely used in structural risk minimisation (SRM) -- this is in particular at the core of PAC-Bayesian learning. Despite its successes and unfailing surge of interest in…
External link:
http://arxiv.org/abs/2306.04375
Authors:
Zantedeschi, Valentina, Viallard, Paul, Morvant, Emilie, Emonet, Rémi, Habrard, Amaury, Germain, Pascal, Guedj, Benjamin
Published in:
Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS 2021)
We investigate a stochastic counterpart of majority votes over finite ensembles of classifiers, and study its generalization properties. While our approach holds for arbitrary distributions, we instantiate it with Dirichlet distributions: this allows…
External link:
http://arxiv.org/abs/2106.12535
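The snippet above describes a stochastic majority vote whose voter weights are drawn from a Dirichlet distribution. A minimal sketch of that idea (not the paper's construction; function names and the ±1 voting convention are illustrative assumptions) can be written with only the standard library, sampling Dirichlet weights via normalized Gamma draws:

```python
import random

def sample_dirichlet(alpha, rng):
    """Sample a weight vector from Dirichlet(alpha) by normalizing
    independent Gamma(alpha_j, 1) draws."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(g)
    return [x / total for x in g]

def stochastic_mv_predict(votes, alpha, rng):
    """votes: list of +/-1 outputs of the base classifiers on one input.
    Draw theta ~ Dirichlet(alpha) and return the sign of the
    theta-weighted vote -- a fresh draw per prediction makes the
    majority vote stochastic."""
    theta = sample_dirichlet(alpha, rng)
    score = sum(t * v for t, v in zip(theta, votes))
    return 1 if score >= 0 else -1
```

For example, `stochastic_mv_predict([1, 1, -1], [1.0, 1.0, 1.0], random.Random(0))` returns a ±1 label that varies with the sampled weights.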
Published in:
ECML PKDD 2021, Sep 2021, Bilbao, Spain
In the PAC-Bayesian literature, the C-Bound refers to an insightful relation between the risk of a majority vote classifier (under the zero-one loss) and the first two moments of its margin (i.e., the expected margin and the voters' diversity). Until…
External link:
http://arxiv.org/abs/2104.13626
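The classic first-order C-Bound mentioned in the snippet above states that, when the expected margin is positive, the majority vote's zero-one risk is at most 1 - E[M]²/E[M²]. A minimal empirical sketch (estimating both moments from a sample of margins; this illustrates the classic bound, not the paper's refinement):

```python
def c_bound(margins):
    """First-order C-Bound estimate from a sample of signed margins:
    R(MV) <= 1 - E[M]^2 / E[M^2], valid when E[M] > 0
    (a Cantelli/Chebyshev-style second-moment argument)."""
    n = len(margins)
    mu1 = sum(margins) / n                  # first moment: expected margin
    mu2 = sum(m * m for m in margins) / n   # second moment: margin spread
    assert mu1 > 0, "bound only meaningful for positive expected margin"
    return 1.0 - mu1 * mu1 / mu2
```

On `[0.2, 0.5, -0.1, 0.4]` the empirical risk (fraction of non-positive margins) is 0.25, while `c_bound` returns roughly 0.457, consistent with the bound.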
Published in:
NeurIPS 2021, Dec 2021, Sydney, Australia
We propose the first general PAC-Bayesian generalization bounds for adversarial robustness, that estimate, at test time, how much a model will be invariant to imperceptible perturbations in the input. Instead of deriving a worst-case analysis of the…
External link:
http://arxiv.org/abs/2102.11069
PAC-Bayesian bounds are known to be tight and informative when studying the generalization ability of randomized classifiers. However, they require a loose and costly derandomization step when applied to some families of deterministic models such as…
External link:
http://arxiv.org/abs/2102.08649
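For readers unfamiliar with the bounds these snippets build on: a classic McAllester-style PAC-Bayes bound (not the bound of the paper above) states that, with probability at least 1 - δ over an n-sample, the expected population risk of a randomized classifier is at most its expected empirical risk plus a complexity term driven by KL(Q‖P). A minimal sketch:

```python
import math

def mcallester_bound(emp_risk, kl, n, delta):
    """Classic McAllester-style PAC-Bayes bound: with probability
    >= 1 - delta over an n-sample,
        E_Q[R] <= E_Q[r] + sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n)).
    emp_risk: expected empirical risk E_Q[r] under the posterior Q;
    kl: KL divergence between posterior Q and prior P."""
    slack = (kl + math.log(2.0 * math.sqrt(n) / delta)) / (2.0 * n)
    return emp_risk + math.sqrt(slack)
```

For instance, with empirical risk 0.1, KL = 5 nats, n = 10000, and δ = 0.05, the bound is roughly 0.126, showing how the complexity term shrinks as n grows.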