Showing 1 - 7 of 7 results for search: '"Bitterwolf, Julian"'
Out-of-distribution (OOD) detection is the problem of identifying inputs which are unrelated to the in-distribution task. The OOD detection performance when the in-distribution (ID) is ImageNet-1K is commonly tested on a small range of test OOD …
External link:
http://arxiv.org/abs/2306.00826
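As an illustration of the kind of OOD-detection evaluation these abstracts refer to, below is a minimal sketch of a common baseline: scoring each input by its maximum softmax probability (MSP) and measuring how well that score separates ID from OOD inputs via AUROC. The model, data loaders, and device argument are assumptions for illustration, not the method of any specific paper listed here.

# Minimal sketch (assumed setup, not tied to any listed paper): MSP scoring + AUROC.
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def msp_scores(model, loader, device="cpu"):
    """Return the maximum softmax probability for every input in the loader."""
    model.eval()
    scores = []
    for x, _ in loader:
        logits = model(x.to(device))
        scores.append(F.softmax(logits, dim=1).max(dim=1).values.cpu())
    return torch.cat(scores)

def ood_auroc(model, id_loader, ood_loader, device="cpu"):
    """AUROC of the MSP score for the binary task 'is this input ID or OOD?'."""
    id_s = msp_scores(model, id_loader, device)
    ood_s = msp_scores(model, ood_loader, device)
    labels = torch.cat([torch.ones_like(id_s), torch.zeros_like(ood_s)])
    return roc_auc_score(labels.numpy(), torch.cat([id_s, ood_s]).numpy())

A higher AUROC means the confidence score ranks ID inputs above OOD inputs more reliably; the choice of which OOD test sets to evaluate against is exactly the benchmarking question raised in the first entry above.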
It is an important problem in trustworthy machine learning to recognize out-of-distribution (OOD) inputs, i.e., inputs unrelated to the in-distribution task. Many out-of-distribution detection methods have been suggested in recent years. The goal …
External link:
http://arxiv.org/abs/2206.09880
The application of machine learning in safety-critical systems requires a reliable assessment of uncertainty. However, deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data. Even if trained to be …
External link:
http://arxiv.org/abs/2106.04260
Published in:
Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
Deep neural networks are known to be overconfident when applied to out-of-distribution (OOD) inputs which clearly do not belong to any class. This is a problem in safety-critical applications, since a reliable assessment of the uncertainty of a classifier …
External link:
http://arxiv.org/abs/2007.08473
Author:
Rusak, Evgenia, Schott, Lukas, Zimmermann, Roland S., Bitterwolf, Julian, Bringmann, Oliver, Bethge, Matthias, Brendel, Wieland
The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow. In contrast, the performance of modern image recognition models strongly degrades when evaluated on previously unseen …
External link:
http://arxiv.org/abs/2001.06057
Classifiers used in the wild, in particular for safety-critical systems, should not only have good generalization properties but should also know when they don't know, in particular making low-confidence predictions far away from the training data. We …
External link:
http://arxiv.org/abs/1812.05720
Published in:
Proceedings of the International Symposium on Combinatorial Search. 15:223-228
Neural networks (NN) are increasingly investigated in AI Planning and are used successfully to learn heuristic functions. NNs commonly not only predict a value, but also output a confidence in this prediction. From the perspective of heuristic search …