Showing 1 - 10 of 20 for the search: '"Andéol, Léo"'
We propose a post-hoc, computationally lightweight method to quantify predictive uncertainty in semantic image segmentation. Our approach uses conformal prediction to generate statistically valid prediction sets that are guaranteed to include the ground truth …
External link:
http://arxiv.org/abs/2405.05145
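The snippet above only names the technique; as a point of reference, here is a minimal, generic sketch of split conformal prediction for plain classification — not the segmentation method of the paper. It assumes softmax outputs and a held-out calibration set; the function and variable names are illustrative.

```python
import numpy as np

def split_conformal_sets(cal_scores, cal_labels, test_scores, alpha=0.1):
    """Split conformal prediction for classification (illustrative sketch).

    cal_scores:  (n, K) softmax outputs on a held-out calibration set
    cal_labels:  (n,)   true labels of the calibration set
    test_scores: (m, K) softmax outputs on new inputs
    Returns a boolean (m, K) matrix of prediction sets that contain the
    true label with probability >= 1 - alpha (marginal coverage).
    """
    n = len(cal_labels)
    # Non-conformity score: 1 - softmax probability of the true class.
    nonconformity = 1.0 - cal_scores[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of calibration scores.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q_hat = np.quantile(nonconformity, q_level, method="higher")
    # Keep every class whose non-conformity falls below the threshold.
    return (1.0 - test_scores) <= q_hat
```

The coverage guarantee is marginal over calibration and test data drawn exchangeably; the choice of non-conformity score is what varies between applications.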
Research on Out-Of-Distribution (OOD) detection focuses mainly on building scores that efficiently distinguish OOD data from In-Distribution (ID) data. On the other hand, Conformal Prediction (CP) uses non-conformity scores to construct prediction sets …
External link:
http://arxiv.org/abs/2403.11532
Attribution methods correspond to a class of explainability methods (XAI) that aim to assess how individual inputs contribute to a model's decision-making process. We have identified a significant limitation in one type of attribution method, known as …
External link:
http://arxiv.org/abs/2307.09591
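As a point of reference for what an attribution method computes, below is a small, generic Gradient × Input sketch in PyTorch. It is an assumption-laden illustration of one common attribution method, not the specific method or the limitation studied in the paper.

```python
import torch

def input_gradient_attribution(model, x, target_class):
    """Gradient x Input attribution (illustrative sketch, not the paper's method).

    model:        a differentiable classifier returning logits
    x:            input tensor of shape (1, C, H, W)
    target_class: index of the class whose score is explained
    Returns a tensor with the same shape as x scoring each input feature.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    # Back-propagate the target logit to obtain d(score)/d(input).
    logits[0, target_class].backward()
    # Element-wise product of input and its gradient as the attribution map.
    return (x * x.grad).detach()
```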
Author:
Fel, Thomas, Boutin, Victor, Moayeri, Mazda, Cadène, Rémi, Bethune, Louis, Andéol, Léo, Chalvidal, Mathieu, Serre, Thomas
Published in:
Conference on Neural Information Processing Systems (NeurIPS), 2023
In recent years, concept-based approaches have emerged as some of the most promising explainability methods to help us interpret the decisions of Artificial Neural Networks (ANNs). These methods seek to discover intelligible visual 'concepts' buried …
External link:
http://arxiv.org/abs/2306.07304
Deploying deep learning models in real-world certified systems requires the ability to provide confidence estimates that accurately reflect their uncertainty. In this paper, we demonstrate the use of the conformal prediction framework to construct reliable …
External link:
http://arxiv.org/abs/2304.06052
Author:
Waida, Hiroki, Wada, Yuichiro, Andéol, Léo, Nakagawa, Takumi, Zhang, Yuhui, Kanamori, Takafumi
Contrastive learning is an efficient approach to self-supervised representation learning. Although recent studies have made progress in the theoretical understanding of contrastive learning, the investigation of how to characterize the clusters of the …
External link:
http://arxiv.org/abs/2304.00395
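To make the setting concrete, here is a minimal sketch of a standard contrastive objective (an InfoNCE / NT-Xent-style loss) for two augmented views of the same images. It illustrates contrastive learning in general and is not the theoretical framework of the paper; names and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE / NT-Xent loss for two augmented views (illustrative sketch).

    z1, z2: (N, d) embeddings of two augmentations of the same N images.
    Positive pairs are (z1[i], z2[i]); all other pairs act as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2N, d) stacked views
    sim = z @ z.t() / temperature             # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))         # exclude trivial self-pairs
    n = z1.size(0)
    # Each sample's positive sits N rows away: i <-> i + N.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```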
We present an application of conformal prediction, a form of uncertainty quantification with guarantees, to the detection of railway signals. State-of-the-art architectures are tested and the most promising one undergoes the process of conformalization …
External link:
http://arxiv.org/abs/2301.11136
Author:
Andéol, Léo, Kawakami, Yusei, Wada, Yuichiro, Kanamori, Takafumi, Müller, Klaus-Robert, Montavon, Grégoire
Domain shifts in the training data are common in practical applications of machine learning; they occur for instance when the data is coming from different sources. Ideally, an ML model should work well independently of these shifts, for example, by learning …
External link:
http://arxiv.org/abs/2106.04923
Author:
Andéol, Léo, Kawakami, Yusei, Wada, Yuichiro, Kanamori, Takafumi, Müller, Klaus-Robert, Montavon, Grégoire
Published in:
Neural Networks, October 2023, 167:233-243
Published in:
AI and Ethics, February 2024, Vol. 4, Issue 1, pp. 157-161