Showing 1 - 10 of 173 for search: '"Guedj, Benjamin"'
Uncertainty quantification is critical for ensuring adequate predictive power of computational models used in biology. Focusing on two anaerobic digestion models, this article introduces a novel generalized Bayesian procedure, called VarBUQ, ensuring …
External link:
http://arxiv.org/abs/2405.19824
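As background on the generalized Bayesian setting mentioned in this abstract, a common template (not necessarily the exact construction behind VarBUQ) is the Gibbs posterior, which replaces the likelihood with a tempered loss term:

```latex
% Generic Gibbs (generalized Bayesian) posterior over parameters \theta,
% with prior \pi_0, loss \ell, data x_1, ..., x_n and temperature \eta > 0.
% An illustrative template only; the paper's procedure may differ in detail.
\pi(\theta \mid x_{1:n}) \;\propto\; \pi_0(\theta)\,
  \exp\!\Big(-\eta \sum_{i=1}^{n} \ell(\theta, x_i)\Big)
```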
Sequential Bayesian Filtering aims to estimate the current state distribution of a Hidden Markov Model, given the past observations. The problem is well known to be intractable for most application domains, except in notable cases such as the tabular …
External link:
http://arxiv.org/abs/2402.09796
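The tabular (finite-state) case mentioned in this abstract is one of the settings where the filtering recursion is exactly tractable. Below is a minimal sketch of the predict/update step for a discrete Hidden Markov Model; the transition matrix, emission matrix, and observation sequence are hypothetical toy values chosen only for illustration.

```python
import numpy as np

# Tabular Bayesian filtering for a 2-state Hidden Markov Model (toy example).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # T[i, j] = P(x_{t+1} = j | x_t = i)
E = np.array([[0.7, 0.3],
              [0.4, 0.6]])        # E[i, k] = P(y_t = k | x_t = i)
belief = np.array([0.5, 0.5])     # prior over the initial hidden state

def filter_step(belief, y):
    """One predict/update step: returns P(x_t | y_{1:t})."""
    predicted = belief @ T         # predict: push the belief through the transition kernel
    updated = predicted * E[:, y]  # update: reweight by the observation likelihood
    return updated / updated.sum() # normalise

for y in [0, 1, 1, 0]:             # toy observation sequence
    belief = filter_step(belief, y)
print(belief)
```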
Modern machine learning usually involves predictors in the overparametrised setting (number of trained parameters greater than dataset size), and their training yields not only good performance on training data, but also good generalisation capacity.
External link:
http://arxiv.org/abs/2402.08508
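As a quick numerical illustration of the parenthetical definition above (overparametrisation as parameter count exceeding dataset size), consider a hypothetical one-hidden-layer network; the sizes are made up for the example.

```python
# Hypothetical sizes: 1,000 training examples, a 32 -> 512 -> 10 fully connected network.
n_samples, d_in, d_hidden, d_out = 1_000, 32, 512, 10

# Weights and biases of the two layers.
n_params = (d_in * d_hidden + d_hidden) + (d_hidden * d_out + d_out)
print(n_params, n_samples, n_params > n_samples)  # 22026 1000 True -> overparametrised
```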
This paper contains a recipe for deriving new PAC-Bayes generalisation bounds based on the $(f, \Gamma)$-divergence, and, in addition, presents PAC-Bayes generalisation bounds where we interpolate between a series of probability divergences (including …
External link:
http://arxiv.org/abs/2402.05101
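For orientation, the classical KL-based PAC-Bayes bound (McAllester/Maurer form) is the kind of statement that such recipes generalise; the paper replaces the KL term with other divergences, so the formula below is standard background rather than a result from the paper. With probability at least 1 − δ over an i.i.d. sample of size n, simultaneously for all posteriors Q:

```latex
% Classical PAC-Bayes bound with prior P, posterior Q, and a loss bounded in [0, 1].
\mathbb{E}_{h \sim Q}\big[R(h)\big]
  \;\le\; \mathbb{E}_{h \sim Q}\big[\widehat{R}_n(h)\big]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\big(2\sqrt{n}/\delta\big)}{2n}}
```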
Author:
Clerico, Eugenio, Guedj, Benjamin
We establish explicit dynamics for neural networks whose training objective has a regularising term that constrains the parameters to remain close to their initial value. This keeps the network in a lazy training regime, where the dynamics can be linearised …
External link:
http://arxiv.org/abs/2312.13259
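To make the setup concrete, a generic template for the kind of objective and first-order expansion involved in lazy training is sketched below; the precise regulariser and the resulting dynamics are specified in the paper, so this is only schematic.

```latex
% Training objective with a proximity term to the initialisation \theta_0 (\lambda > 0),
% and the first-order (linearised) approximation of the network around \theta_0.
\min_{\theta}\; \widehat{L}(\theta) + \frac{\lambda}{2}\,\lVert \theta - \theta_0 \rVert^2,
\qquad
f_{\theta}(x) \;\approx\; f_{\theta_0}(x)
  + \nabla_{\theta} f_{\theta_0}(x)^{\top} (\theta - \theta_0)
```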
We introduce a novel strategy to train randomised predictors in federated learning, where each node of the network aims at preserving its privacy by releasing a local predictor but keeping secret its training dataset with respect to the other nodes.
External link:
http://arxiv.org/abs/2310.11203
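To illustrate the setting (each node releases a predictor while its data never leave the node), here is a bare-bones federated-averaging-style sketch with synthetic data; it is a generic baseline for the setting, not the randomised-predictor strategy proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_fit(X, y):
    # Each node keeps (X, y) private and releases only the fitted least-squares weights.
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Hypothetical private datasets held by three nodes.
nodes = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(3)]

local_weights = [local_fit(X, y) for X, y in nodes]  # only these leave the nodes
global_weights = np.mean(local_weights, axis=0)      # server aggregates by averaging
print(global_weights)
```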
Author:
Hellström, Fredrik, Guedj, Benjamin
We derive generic information-theoretic and PAC-Bayesian generalization bounds involving an arbitrary convex comparator function, which measures the discrepancy between the training and population loss. The bounds hold under the assumption that the …
External link:
http://arxiv.org/abs/2310.10534
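A standard ingredient behind both information-theoretic and PAC-Bayesian bounds of this kind is the Donsker–Varadhan change of measure, recalled below for context; how the convex comparator enters, and the exact moment condition it must satisfy, are specific to the paper.

```latex
% Donsker--Varadhan inequality: for any distributions Q, P and any measurable g
% with a finite exponential moment under P,
\mathbb{E}_{W \sim Q}\big[g(W)\big]
  \;\le\; \mathrm{KL}(Q \,\|\, P) + \ln \mathbb{E}_{W \sim P}\big[e^{g(W)}\big]
```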
A fundamental question in theoretical machine learning is generalization. Over the past decades, the PAC-Bayesian approach has been established as a flexible framework to address the generalization capabilities of machine learning algorithms, and …
External link:
http://arxiv.org/abs/2309.04381
Minimising upper bounds on the population risk or the generalisation gap has been widely used in structural risk minimisation (SRM) -- this is in particular at the core of PAC-Bayesian learning. Despite its successes and unfailing surge of interest …
External link:
http://arxiv.org/abs/2306.04375
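Schematically, structural risk minimisation as described above selects a hypothesis by minimising the empirical risk plus a high-probability upper bound on the generalisation gap; a generic template (not the paper's specific objective) is:

```latex
% SRM template: \widehat{R}_n is the empirical risk and pen(h, n, \delta) upper-bounds
% R(h) - \widehat{R}_n(h) with probability at least 1 - \delta.
\widehat{h} \;\in\; \operatorname*{arg\,min}_{h \in \mathcal{H}}
  \Big\{ \widehat{R}_n(h) + \mathrm{pen}(h, n, \delta) \Big\}
```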
Author:
Haddouche, Maxime, Guedj, Benjamin
PAC-Bayes learning is an established framework to both assess the generalisation ability of learning algorithms, and design new learning algorithms by exploiting generalisation bounds as training objectives. Most of the existing bounds involve a …
External link:
http://arxiv.org/abs/2304.07048
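As an illustration of using a generalisation bound as a training objective, here is a minimal sketch in the spirit of Dziugaite and Roy (2017): a diagonal Gaussian posterior over the weights of a linear classifier is trained on synthetic data by minimising the empirical loss plus a McAllester-style complexity term. It is a surrogate objective only (a rigorous certificate would require a bounded loss), and it is not the bound derived in the paper.

```python
import math
import torch

torch.manual_seed(0)
n, d = 500, 20
X = torch.randn(n, d)
y = (X[:, 0] > 0).float()                          # synthetic labels

mu = torch.zeros(d, requires_grad=True)            # posterior mean
rho = torch.full((d,), -3.0, requires_grad=True)   # parametrises the posterior std
opt = torch.optim.Adam([mu, rho], lr=1e-2)
delta = 0.05

for step in range(500):
    sigma = torch.nn.functional.softplus(rho)      # positive standard deviations
    w = mu + sigma * torch.randn(d)                # reparameterised posterior sample
    emp_loss = torch.nn.functional.binary_cross_entropy_with_logits(X @ w, y)

    # KL between the posterior N(mu, diag(sigma^2)) and the N(0, I) prior.
    kl = 0.5 * (sigma.pow(2) + mu.pow(2) - 1.0 - 2.0 * torch.log(sigma)).sum()

    # McAllester-style complexity term used as a penalty.
    penalty = torch.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))

    loss = emp_loss + penalty
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(emp_loss), float(kl))
```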