Showing 1 - 10 of 88 for search: '"Fortuin, Vincent"'
State-of-the-art computer vision tasks, like monocular depth estimation (MDE), rely heavily on large, modern Transformer-based architectures. However, their application in safety-critical domains demands reliable predictive performance and uncertainty…
External link:
http://arxiv.org/abs/2409.17085
Knowing which features of a multivariate time series to measure and when is a key task in medicine, wearables, and robotics. Better acquisition policies can reduce costs while maintaining or even improving the performance of downstream predictors. In…
External link:
http://arxiv.org/abs/2407.13429
Laplace approximations are popular techniques for endowing deep networks with epistemic uncertainty estimates as they can be applied without altering the predictions of the neural network, and they scale to large models and datasets. While the choice…
External link:
http://arxiv.org/abs/2407.13711
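To make the abstract's premise concrete, the following is a minimal sketch of the generic Laplace approximation on a toy logistic regression (not the paper's method): find the MAP estimate, then place a Gaussian over the weights whose covariance is the inverse Hessian of the negative log posterior at the MAP. All data, shapes, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Toy Laplace approximation: fit w_MAP for a logistic regression by
# gradient descent, then approximate the posterior as N(w_MAP, H^{-1}).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X @ np.array([1.5, -2.0]) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

prior_prec = 1.0           # isotropic Gaussian prior precision
w = np.zeros(2)
for _ in range(500):       # simple gradient descent to the MAP estimate
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) + prior_prec * w
    w -= 0.1 * grad / len(X)

# Hessian of the negative log posterior at the MAP
p = sigmoid(X @ w)
H = X.T @ (X * (p * (1 - p))[:, None]) + prior_prec * np.eye(2)
posterior_cov = np.linalg.inv(H)   # Laplace posterior: N(w_MAP, H^{-1})

print("w_MAP:", w)
print("posterior variances:", np.diag(posterior_cov))
```

Note how the point prediction at `w_MAP` is untouched; the Gaussian curvature term only adds uncertainty around it, which is exactly why the abstract stresses that Laplace can be applied "without altering the predictions."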
Author:
Kristiadi, Agustinus, Strieth-Kalthoff, Felix, Subramanian, Sriram Ganapathi, Fortuin, Vincent, Poupart, Pascal, Pleiss, Geoff
Bayesian optimization (BO) is an integral part of automated scientific discovery -- the so-called self-driving lab -- where human inputs are ideally minimal or at least non-blocking. However, scientists often have strong intuition, and thus human feedback…
External link:
http://arxiv.org/abs/2406.06459
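For readers unfamiliar with the BO loop the abstract refers to, here is a minimal from-scratch sketch: a Gaussian-process surrogate with an RBF kernel and a UCB acquisition on a 1-D grid. The objective, kernel length scale, and UCB coefficient are illustrative assumptions, not the paper's setup (which concerns incorporating human feedback into this loop).

```python
import numpy as np

# Minimal Bayesian-optimisation loop: GP surrogate + UCB acquisition.
rng = np.random.default_rng(1)

def f(x):                                # unknown objective to maximise
    return -np.sin(3 * x) - x**2 + 0.7 * x

def rbf(a, b, ls=0.3):                   # RBF kernel, unit variance
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

X = rng.uniform(-1, 2, size=3)           # initial design points
y = f(X)
grid = np.linspace(-1, 2, 200)           # candidate locations

for _ in range(10):
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)      # GP posterior mean on the grid
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))
    x_next = grid[np.argmax(ucb)]        # query where UCB is largest
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

print("best x:", X[np.argmax(y)], "best f:", y.max())
```

The human-in-the-loop question the paper raises is where `x_next` comes from: a scientist's intuition can override or re-rank the acquisition's suggestion without blocking the loop.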
Fine-tuned Large Language Models (LLMs) often suffer from overconfidence and poor calibration, particularly when fine-tuned on small datasets. To address these challenges, we propose a simple combination of Low-Rank Adaptation (LoRA) with Gaussian Stochastic Weight Averaging…
External link:
http://arxiv.org/abs/2405.03425
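As background for this entry, here is a minimal sketch of the LoRA half of the combination: the frozen pretrained weight W is adapted by a trainable low-rank product B @ A, so only r * (d_in + d_out) parameters are trained. Shapes, `r`, and `alpha` are illustrative assumptions; the Gaussian/Bayesian part of the paper is not shown.

```python
import numpy as np

# Low-Rank Adaptation (LoRA) in one picture: y = W x + (alpha/r) * B A x.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 32, 4, 8

W = rng.normal(size=(d_out, d_in))            # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))    # trainable down-projection
B = np.zeros((d_out, r))                      # trainable up-projection (zero init)

def lora_forward(x):
    # With B = 0 at initialisation, the adapted layer reproduces the
    # pretrained layer exactly; training only moves the low-rank term.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
assert np.allclose(lora_forward(x), W @ x)    # no drift before training

B = rng.normal(scale=0.01, size=(d_out, r))   # stand-in for a trained B
delta = (alpha / r) * (B @ A)                 # effective weight update
print("update rank:", np.linalg.matrix_rank(delta))
```

Because the update `delta` has rank at most `r`, a Gaussian posterior over the small matrices A and B is far cheaper than one over W itself, which is what makes the Bayesian treatment in the paper tractable.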
Author:
Manduchi, Laura, Pandey, Kushagra, Bamler, Robert, Cotterell, Ryan, Däubener, Sina, Fellenz, Sophie, Fischer, Asja, Gärtner, Thomas, Kirchler, Matthias, Kloft, Marius, Li, Yingzhen, Lippert, Christoph, de Melo, Gerard, Nalisnick, Eric, Ommer, Björn, Ranganath, Rajesh, Rudolph, Maja, Ullrich, Karen, Broeck, Guy Van den, Vogt, Julia E, Wang, Yixin, Wenzel, Florian, Wood, Frank, Mandt, Stephan, Fortuin, Vincent
The field of deep generative modeling has grown rapidly and consistently over the years. With the availability of massive amounts of training data coupled with advances in scalable unsupervised learning paradigms, recent large-scale generative models…
External link:
http://arxiv.org/abs/2403.00025
Neural network sparsification is a promising avenue to save computational time and memory costs, especially in an age where many successful AI models are becoming too large to naïvely deploy on consumer hardware. While much work has focused on diff…
External link:
http://arxiv.org/abs/2402.15978
Author:
Papamarkou, Theodore, Skoularidou, Maria, Palla, Konstantina, Aitchison, Laurence, Arbel, Julyan, Dunson, David, Filippone, Maurizio, Fortuin, Vincent, Hennig, Philipp, Hernández-Lobato, José Miguel, Hubin, Aliaksandr, Immer, Alexander, Karaletsos, Theofanis, Khan, Mohammad Emtiyaz, Kristiadi, Agustinus, Li, Yingzhen, Mandt, Stephan, Nemeth, Christopher, Osborne, Michael A., Rudner, Tim G. J., Rügamer, David, Teh, Yee Whye, Welling, Max, Wilson, Andrew Gordon, Zhang, Ruqi
In the current landscape of deep learning research, there is a predominant emphasis on achieving high predictive accuracy in supervised tasks involving large image and language datasets. However, a broader perspective reveals a multitude of overlooked…
External link:
http://arxiv.org/abs/2402.00809
Graph contrastive learning has shown great promise when labeled data is scarce, but large unlabeled datasets are available. However, it often does not take uncertainty estimation into account. We show that a variational Bayesian neural network approach…
External link:
http://arxiv.org/abs/2312.00232
An important yet underexplored question in the PAC-Bayes literature is how much tightness we lose by restricting the posterior family to factorized Gaussian distributions when optimizing a PAC-Bayes bound. We investigate this issue by estimating data…
External link:
http://arxiv.org/abs/2310.20053