Showing 1 - 10 of 63 for search: '"Nalisnick, Eric"'
Author:
Lehmann, Nils, Gawlikowski, Jakob, Stewart, Adam J., Jancauskas, Vytautas, Depeweg, Stefan, Nalisnick, Eric, Gottschling, Nina Maria
Uncertainty quantification (UQ) is an essential tool for applying deep neural networks (DNNs) to real-world tasks, as it attaches a degree of confidence to DNN outputs. However, despite its benefits, UQ is often left out of the standard DNN workflow…
External link:
http://arxiv.org/abs/2410.03390
Subjective tasks in NLP have been mostly relegated to objective standards, where the gold label is decided by taking the majority vote. This obfuscates annotator disagreement and the inherent uncertainty of the label. We argue that subjectivity should…
External link:
http://arxiv.org/abs/2408.14141
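To make the majority-vote aggregation mentioned in the snippet concrete, here is a toy sketch (the labels and counts are illustrative, not from the paper): the hard gold label discards exactly the annotator disagreement that a soft label distribution would retain.

```python
# Toy majority-vote aggregation: the hard gold label hides the
# annotator disagreement that the soft label distribution keeps.
from collections import Counter

annotations = ["offensive", "offensive", "not_offensive"]  # 3 annotators
counts = Counter(annotations)

gold = counts.most_common(1)[0][0]                        # majority vote
soft = {label: n / len(annotations) for label, n in counts.items()}

print("gold label:", gold)  # 'offensive' -- the 2-vs-1 split is lost
print("soft label:", soft)  # keeps the 2/3 vs 1/3 disagreement
```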
Distribution shifts between training and test data are inevitable over the lifecycle of a deployed model, leading to performance decay. Adapting a model on test samples can help mitigate this drop in performance. However, most test-time adaptation methods…
External link:
http://arxiv.org/abs/2407.12492
Author:
Jazbec, Metod, Timans, Alexander, Veljković, Tin Hadži, Sakmann, Kaspar, Zhang, Dan, Naesseth, Christian A., Nalisnick, Eric
Scaling machine learning models significantly improves their performance. However, such gains come at the cost of inference being slow and resource-intensive. Early-exit neural networks (EENNs) offer a promising solution: they accelerate inference by…
External link:
http://arxiv.org/abs/2405.20915
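As background on the early-exit mechanism, a minimal PyTorch sketch (the architecture, heads, and confidence threshold are illustrative assumptions, not the paper's model): an intermediate classifier head returns its prediction as soon as its confidence clears a threshold, skipping the remaining layers.

```python
# Minimal early-exit sketch (illustrative, not the paper's model).
# A 2-block MLP with a classifier head after each block; inference
# exits at the first head whose softmax confidence exceeds `threshold`.
import torch
import torch.nn as nn

class EarlyExitMLP(nn.Module):
    def __init__(self, d_in=32, d_hidden=64, n_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.ReLU())
        self.head1 = nn.Linear(d_hidden, n_classes)  # early head
        self.head2 = nn.Linear(d_hidden, n_classes)  # final head

    @torch.no_grad()
    def predict(self, x, threshold=0.9):
        h = self.block1(x)
        p1 = self.head1(h).softmax(-1)
        if p1.max() >= threshold:   # confident enough: exit early
            return p1, "exit-1"
        h = self.block2(h)          # otherwise run the full network
        return self.head2(h).softmax(-1), "exit-2"

model = EarlyExitMLP()
probs, exit_taken = model.predict(torch.randn(1, 32))
print(exit_taken, probs.argmax(-1).item())
```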
Deep neural networks (DNNs) have been successfully applied to earth observation (EO) data and opened new research avenues. Despite the theoretical and practical advances of these techniques, DNNs are still considered black-box tools and by default are…
External link:
http://arxiv.org/abs/2404.08325
Quantifying a model's predictive uncertainty is essential for safety-critical applications such as autonomous driving. We consider quantifying such uncertainty for multi-object detection. In particular, we leverage conformal prediction to obtain uncertainty…
External link:
http://arxiv.org/abs/2403.07263
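For background on conformal prediction in general (the multi-object details are in the paper), a minimal split-conformal sketch on a toy scalar regression problem; the data, the stand-in predictor, and the absolute-residual score are illustrative assumptions.

```python
# Minimal split conformal prediction sketch (illustrative; not the
# paper's multi-object procedure). Calibrate a quantile of held-out
# residuals, then emit intervals with ~1 - alpha marginal coverage.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1

# Toy data: y = 2x + noise; the "model" is a fixed linear predictor
# standing in for a trained detector output.
x_cal, x_test = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)
y_cal = 2 * x_cal + rng.normal(0, 0.1, 500)
y_test = 2 * x_test + rng.normal(0, 0.1, 500)
predict = lambda x: 2 * x

# Nonconformity scores on the calibration split: |y - y_hat|.
scores = np.abs(y_cal - predict(x_cal))
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction intervals and their empirical coverage on test data.
lo, hi = predict(x_test) - q, predict(x_test) + q
print("coverage:", np.mean((y_test >= lo) & (y_test <= hi)))
```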
The learning to defer (L2D) framework allows autonomous systems to be safe and robust by allocating difficult decisions to a human expert. All existing work on L2D assumes that each expert is well-identified, and if any expert were to change, the system…
External link:
http://arxiv.org/abs/2403.02683
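To illustrate the basic L2D decision (not the paper's multi-expert formulation), a toy deferral rule: route the input to the expert whenever the expert's estimated correctness exceeds the model's confidence. The function name and probability values are assumptions for illustration.

```python
# Toy learning-to-defer decision rule (illustrative, not the paper's
# method). Given model class-probabilities and an estimate of the
# expert's per-input accuracy, defer when the expert is expected to
# be more reliable than the model.
import numpy as np

def defer_decision(model_probs, expert_acc_estimate):
    """Return (prediction_or_None, deferred_flag)."""
    model_conf = model_probs.max()
    if expert_acc_estimate > model_conf:
        return None, True           # route the input to the human expert
    return int(model_probs.argmax()), False

probs = np.array([0.55, 0.30, 0.15])  # toy predictive distribution
print(defer_decision(probs, expert_acc_estimate=0.80))  # (None, True)
print(defer_decision(probs, expert_acc_estimate=0.40))  # (0, False)
```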
Author:
Allingham, James Urquhart, Mlodozeniec, Bruno Kacper, Padhy, Shreyas, Antorán, Javier, Krueger, David, Turner, Richard E., Nalisnick, Eric, Hernández-Lobato, José Miguel
Correctly capturing the symmetry transformations of data can lead to efficient models with strong generalization capabilities, though methods incorporating symmetries often require prior knowledge. While recent advancements have been made in learning…
External link:
http://arxiv.org/abs/2403.01946
Author:
Manduchi, Laura, Pandey, Kushagra, Bamler, Robert, Cotterell, Ryan, Däubener, Sina, Fellenz, Sophie, Fischer, Asja, Gärtner, Thomas, Kirchler, Matthias, Kloft, Marius, Li, Yingzhen, Lippert, Christoph, de Melo, Gerard, Nalisnick, Eric, Ommer, Björn, Ranganath, Rajesh, Rudolph, Maja, Ullrich, Karen, Broeck, Guy Van den, Vogt, Julia E, Wang, Yixin, Wenzel, Florian, Wood, Frank, Mandt, Stephan, Fortuin, Vincent
The field of deep generative modeling has grown rapidly and consistently over the years. With the availability of massive amounts of training data coupled with advances in scalable unsupervised learning paradigms, recent large-scale generative models…
External link:
http://arxiv.org/abs/2403.00025
Knowing if a model will generalize to data 'in the wild' is crucial for safe deployment. To this end, we study model disagreement notions that consider the full predictive distribution - specifically disagreement based on Hellinger distance, Jensen-Shannon…
External link:
http://arxiv.org/abs/2312.08033
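For reference, the two distances named in the snippet, computed between two toy predictive distributions; this is a minimal sketch of the quantities only, and the paper's full disagreement notions may differ in aggregation and other details.

```python
# Disagreement between two predictive distributions via Hellinger
# distance and Jensen-Shannon divergence (toy sketch).
import numpy as np

def hellinger(p, q):
    # H(p, q) = (1 / sqrt(2)) * || sqrt(p) - sqrt(q) ||_2
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2)

def js_divergence(p, q):
    # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), m = (p + q) / 2
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.7, 0.2, 0.1])  # model 1's predictive distribution
q = np.array([0.4, 0.4, 0.2])  # model 2's predictive distribution
print("Hellinger:", hellinger(p, q))
print("Jensen-Shannon:", js_divergence(p, q))
```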