Showing 1 - 6 of 6
for search: '"Galil, Ido"'
Author:
Bercovich, Akhiad, Ronen, Tomer, Abramovich, Talor, Ailon, Nir, Assaf, Nave, Dabbah, Mohammad, Galil, Ido, Geifman, Amnon, Geifman, Yonatan, Golan, Izhak, Haber, Netanel, Karpas, Ehud, Levy, Itay, Mor, Shahar, Moshe, Zach, Nabwani, Najeeb, Puny, Omri, Rubin, Ran, Schen, Itamar, Shahaf, Ido, Tropp, Oren, Argov, Omer Ullman, Zilberstein, Ran, El-Yaniv, Ran
Large language models (LLMs) have demonstrated remarkable capabilities, but their adoption is limited by high computational costs during inference. While increasing parameter counts enhances accuracy, it also widens the gap between state-of-the-art c…
External link:
http://arxiv.org/abs/2411.19146
Deploying deep neural networks for risk-sensitive tasks necessitates an uncertainty estimation mechanism. This paper introduces hierarchical selective classification, extending selective classification to a hierarchical setting. Our approach leverage…
External link:
http://arxiv.org/abs/2405.11533
Published in:
International Conference on Learning Representations (2023)
When deployed for risk-sensitive tasks, deep neural networks must be able to detect instances with labels from outside the distribution for which they were trained. In this paper we present a novel framework to benchmark the ability of image classifi…
External link:
http://arxiv.org/abs/2302.11893
Published in:
International Conference on Learning Representations (2023)
When deployed for risk-sensitive tasks, deep neural networks must include an uncertainty estimation mechanism. Here we examine the relationship between deep architectures and their respective training regimes, with their corresponding selective predi…
External link:
http://arxiv.org/abs/2302.11874
Due to the comprehensive nature of this paper, it has been updated and split into two separate papers: "A Framework For Benchmarking Class-out-of-distribution Detection And Its Application To ImageNet" and "What Can We Learn From The Selective Predic…
External link:
http://arxiv.org/abs/2206.02152
Author:
Galil, Ido, El-Yaniv, Ran
Published in:
Neural Information Processing Systems Conference (2021)
Deep neural networks (DNNs) have proven to be powerful predictors and are widely used for various tasks. Credible uncertainty estimation of their predictions, however, is crucial for their deployment in many risk-sensitive applications. In this paper…
External link:
http://arxiv.org/abs/2110.13741