Showing 1 - 4 of 4 for the search: '"Roy, Hrittik"'
Bayesian deep learning all too often underfits, so that the Bayesian prediction is less accurate than a simple point estimate. Uncertainty quantification then comes at the cost of accuracy. For linearized models, the null space of the generalized Gauss-Newton …
External link: http://arxiv.org/abs/2410.16901
Authors: Roy, Hrittik; Miani, Marco; Ek, Carl Henrik; Hennig, Philipp; Pförtner, Marvin; Tatzel, Lukas; Hauberg, Søren
Current approximate posteriors in Bayesian neural networks (BNNs) exhibit a crucial limitation: they fail to maintain invariance under reparameterization, i.e. BNNs assign different posterior densities to different parametrizations of identical functions …
External link: http://arxiv.org/abs/2406.03334
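The snippet above makes a concrete claim: two weight settings can define exactly the same function yet receive different posterior densities. A minimal sketch of that effect (my own illustration, not code from the paper; the tiny ReLU network and the standard-normal "posterior" are assumptions made purely for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer ReLU network f(x) = W2 @ relu(W1 @ x).
W1 = rng.normal(size=(3, 2))
W2 = rng.normal(size=(1, 3))
x = rng.normal(size=(2,))

def f(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Reparameterize: for a > 0, scaling W1 by a and W2 by 1/a leaves f unchanged,
# because ReLU is positively homogeneous.
a = 2.0
W1_alt, W2_alt = a * W1, W2 / a
print(np.allclose(f(W1, W2, x), f(W1_alt, W2_alt, x)))  # True: identical function

# A factorized standard-normal "posterior" over the flattened weights assigns
# these two functionally identical parameter vectors different log-densities.
def log_density(theta):
    return -0.5 * np.sum(theta**2) - 0.5 * theta.size * np.log(2 * np.pi)

theta     = np.concatenate([W1.ravel(), W2.ravel()])
theta_alt = np.concatenate([W1_alt.ravel(), W2_alt.ravel()])
print(log_density(theta), log_density(theta_alt))  # generally different values
```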
Tuning scientific and probabilistic machine learning models (for example, partial differential equations, Gaussian processes, or Bayesian neural networks) often relies on evaluating functions of matrices whose size grows with the data set or the …
External link: http://arxiv.org/abs/2405.17277
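The snippet above refers to functions of matrices whose size grows with the data set. A standard example of that situation, sketched here purely for illustration (not taken from the paper; the RBF kernel and noise level are assumptions), is the Gaussian-process log marginal likelihood, which needs a log-determinant and a solve with an N x N kernel matrix:

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    # Squared-exponential kernel matrix; its size is N x N for N data points.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_log_marginal_likelihood(X, y, lengthscale=1.0, noise=0.1):
    N = X.shape[0]
    K = rbf_kernel(X, lengthscale) + noise**2 * np.eye(N)  # grows with the data set
    L = np.linalg.cholesky(K)                              # O(N^3) factorization
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))    # K^{-1} y via triangular solves
    log_det = 2.0 * np.sum(np.log(np.diag(L)))             # log|K| from the Cholesky factor
    return -0.5 * y @ alpha - 0.5 * log_det - 0.5 * N * np.log(2 * np.pi)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
print(gp_log_marginal_likelihood(X, y))
```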
One of the main challenges in modern deep learning is to understand why such over-parameterized models perform so well when trained on finite data. One way to analyze this notion of generalization is through the properties of the associated loss landscape …
External link: http://arxiv.org/abs/2307.04719