Showing 1 - 10 of 33
for search: '"Cadene, Remi"'
Author:
Fel, Thomas, Boutin, Victor, Moayeri, Mazda, Cadène, Rémi, Bethune, Louis, Andéol, Léo, Chalvidal, Mathieu, Serre, Thomas
Published in:
Conference on Neural Information Processing Systems (NeurIPS), 2023
In recent years, concept-based approaches have emerged as some of the most promising explainability methods to help us interpret the decisions of Artificial Neural Networks (ANNs). These methods seek to discover intelligible visual 'concepts' buried within the complex patterns of ANN activations… (a minimal sketch of the extraction step follows this entry)
External link:
http://arxiv.org/abs/2306.07304
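Not this paper's pipeline, but a minimal sketch of the concept-extraction step such methods typically build on: factorizing a layer's non-negative activations so that each factor can be read as a candidate "concept". The random data, the shapes, and the choice of NMF are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Stand-in for non-negative (e.g. post-ReLU) activations of image patches
# at some layer, shape (n_patches, n_channels).
activations = rng.random((500, 64))

# Factorize A ~= U @ W: rows of W are candidate "concepts" (directions in
# feature space); U[i] says how strongly each concept is present in patch i.
nmf = NMF(n_components=10, init="nndsvd", max_iter=500)
U = nmf.fit_transform(activations)   # (n_patches, n_concepts)
W = nmf.components_                  # (n_concepts, n_channels)

# The patches with the highest coefficient for a concept are the ones
# you would display to a human to interpret that concept.
top_patches = np.argsort(U[:, 0])[::-1][:5]
print("patches that best illustrate concept 0:", top_patches)
```

The importance-estimation step the abstract alludes to would then quantify how much each concept contributes to the model's prediction.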
Author:
Fel, Thomas, Boissin, Thibaut, Boutin, Victor, Picard, Agustin, Novello, Paul, Colin, Julien, Linsley, Drew, Rousseau, Tom, Cadène, Rémi, Gardes, Laurent, Serre, Thomas
Published in:
Conference on Neural Information Processing Systems (NeurIPS), 2023
Feature visualization has gained substantial popularity, particularly after the influential work by Olah et al. in 2017, which established it as a crucial tool for explainability. However, its widespread adoption has been limited due to a reliance on tricks to generate interpretable images… (a minimal sketch of the basic technique follows this entry)
External link:
http://arxiv.org/abs/2306.06805
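For orientation, a minimal sketch of feature visualization in its plainest form: gradient ascent on the input to maximize a chosen unit's activation. The tiny model and unit index are placeholders; this unregularized version tends to produce noisy, hard-to-read images, which is precisely the reliance on extra tricks the paper addresses.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
model.eval()
for p in model.parameters():          # freeze weights: we optimize the input
    p.requires_grad_(False)

unit = 3                              # unit whose activation we maximize
x = torch.randn(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    loss = -model(x)[0, unit]         # ascend the unit's activation
    loss.backward()
    optimizer.step()

print("final activation:", model(x)[0, unit].item())
```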
Author:
Fel, Thomas, Picard, Agustin, Bethune, Louis, Boissin, Thibaut, Vigouroux, David, Colin, Julien, Cadène, Rémi, Serre, Thomas
Published in:
Proceedings of the IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2023
Attribution methods, which employ heatmaps to identify the most influential regions of an image that impact model decisions, have gained widespread popularity as a type of explainability method. However, recent research has exposed the limited practical value of these explanations… (a minimal saliency sketch follows this entry)
External link:
http://arxiv.org/abs/2211.10154
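A minimal sketch of the kind of attribution heatmap the abstract refers to, using plain gradient saliency. The model and image are stand-ins, and saliency is only one of many attribution methods, not this paper's contribution.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
model.eval()

image = torch.randn(1, 3, 32, 32, requires_grad=True)
target_class = 5

score = model(image)[0, target_class]
score.backward()                       # gradient of the class score w.r.t. pixels

# Heatmap: max gradient magnitude across the color channels.
heatmap = image.grad.abs().max(dim=1).values[0]   # shape (32, 32)
print("most influential pixel (row, col):", divmod(int(heatmap.argmax()), 32))
```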
Author:
Fel, Thomas, Hervier, Lucas, Vigouroux, David, Poche, Antonin, Plakoo, Justin, Cadene, Remi, Chalvidal, Mathieu, Colin, Julien, Boissin, Thibaut, Bethune, Louis, Picard, Agustin, Nicodeme, Claire, Gardes, Laurent, Flandin, Gregory, Serre, Thomas
Today's most advanced machine-learning models are hardly scrutable. The key challenge for explainability methods is to assist researchers in opening up these black boxes, by revealing the strategy that led to a given decision, by characterizing their internal states, or by studying the underlying data representation… (a minimal usage sketch follows this entry)
External link:
http://arxiv.org/abs/2206.04394
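A minimal usage sketch for the Xplique toolbox this entry describes, assuming its documented attribution pattern (an explainer from `xplique.attributions` with an `explain(inputs, targets)` call); the exact API may differ across versions, and the model and data below are stand-ins.

```python
import tensorflow as tf
from xplique.attributions import Saliency

# Stand-in classifier and data; any trained Keras model would do.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])
images = tf.random.uniform((4, 32, 32, 3))
targets = tf.one_hot([0, 1, 2, 3], depth=10)   # one-hot targets

explainer = Saliency(model)
explanations = explainer.explain(images, targets)  # one heatmap per image
print(explanations.shape)
```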
Author:
Fel, Thomas, Ducoffe, Melanie, Vigouroux, David, Cadene, Remi, Capelle, Mikael, Nicodeme, Claire, Serre, Thomas
Published in:
Proceedings of the IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2023
A variety of methods have been proposed to try to explain how deep neural networks make their decisions. Key to those approaches is the need to sample the pixel space efficiently in order to derive importance maps. However, it has been shown that the sampling methods used to date introduce biases and other artifacts… (a minimal occlusion sketch follows this entry)
External link:
http://arxiv.org/abs/2202.07728
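A minimal sketch of the perturbation-based importance maps mentioned above: occlude one patch at a time and record the drop in the target score. This is the naive baseline, not the paper's verified perturbation analysis; the model, patch size, and zero baseline are illustrative choices.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.randn(1, 3, 32, 32)
target, patch = 5, 8

with torch.no_grad():
    base = model(image)[0, target].item()
    importance = torch.zeros(32 // patch, 32 // patch)
    for i in range(0, 32, patch):
        for j in range(0, 32, patch):
            occluded = image.clone()
            occluded[:, :, i:i + patch, j:j + patch] = 0.0  # zero baseline
            # Importance = drop in the target score when the patch is hidden.
            importance[i // patch, j // patch] = base - model(occluded)[0, target]
    print(importance)   # coarse map: larger drop = more important region
```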
A multitude of explainability methods and associated fidelity performance metrics have been proposed to help better understand how modern AI systems make decisions. However, much of the current work has remained theoretical -- without much consideration for the human end-user…
External link:
http://arxiv.org/abs/2112.04417
Author:
Fel, Thomas, Cadene, Remi, Chalvidal, Mathieu, Cord, Matthieu, Vigouroux, David, Serre, Thomas
Published in:
Conference on Neural Information Processing Systems (NeurIPS), Dec 2021, Sydney, Australia
We describe a novel attribution method which is grounded in Sensitivity Analysis and uses Sobol indices. Beyond modeling the individual contributions of image regions, Sobol indices provide an efficient way to capture higher-order interactions between image regions… (a minimal estimator sketch follows this entry)
External link:
http://arxiv.org/abs/2111.04138
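A minimal sketch of estimating total Sobol indices by Monte Carlo with Jansen's estimator on a toy function. In the paper's setting the inputs would be random masks over image regions and `f` the black-box model's score; the toy function here, with an explicit interaction term, is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 20_000

def f(x):
    # Toy black box with an interaction between inputs 0 and 1;
    # input 2 is unused, so its index should come out near zero.
    return x[:, 0] + 2.0 * x[:, 1] + 3.0 * x[:, 0] * x[:, 1]

A = rng.random((n, d))
B = rng.random((n, d))
fA = f(A)
var_total = fA.var()

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]        # resample only coordinate i
    # Jansen's estimator of the total-order index S_Ti: it captures both
    # the main effect of input i and all its interactions with the others.
    S_Ti = 0.5 * np.mean((fA - f(ABi)) ** 2) / var_total
    print(f"total Sobol index of input {i}: {S_Ti:.3f}")
```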
Author:
Vaishnav, Mohit, Cadene, Remi, Alamia, Andrea, Linsley, Drew, VanRullen, Rufin, Serre, Thomas
Published in:
Neural Computation, 2022
Visual understanding requires comprehending complex visual relations between objects within a scene. Here, we seek to characterize the computational demands for abstract visual reasoning. We do this by systematically assessing the ability of modern deep convolutional neural networks…
External link:
http://arxiv.org/abs/2108.03603
We introduce an evaluation methodology for visual question answering (VQA) to better diagnose cases of shortcut learning. These cases happen when a model exploits spurious statistical regularities to produce correct answers but does not actually deploy the desired behavior… (a minimal sketch of the idea follows this entry)
External link:
http://arxiv.org/abs/2104.03149
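A minimal sketch of the diagnosis idea on fake data: mine a trivial shortcut rule, then compare accuracy on examples where the shortcut happens to work against "counterexamples" where it fails. The shortcut used here (most frequent answer per first word of the question) and the toy data are deliberate simplifications, not the paper's procedure.

```python
from collections import Counter, defaultdict

# Toy "training" pairs (question, answer) and "test" triples
# (question, ground truth, model prediction).
train = [("what color is the banana", "yellow"),
         ("what color is the sky", "blue"),
         ("what color is the taxi", "yellow"),
         ("is the man sitting", "yes")]
test = [("what color is the lemon", "yellow", "yellow"),
        ("what color is the strawberry", "red", "yellow"),
        ("is the dog running", "no", "no")]

# Mine a deliberately naive shortcut: most frequent answer per first word.
counts = defaultdict(Counter)
for question, answer in train:
    counts[question.split()[0]][answer] += 1
shortcut = {word: c.most_common(1)[0][0] for word, c in counts.items()}

easy = [(t, p) for q, t, p in test if shortcut.get(q.split()[0]) == t]
counter = [(t, p) for q, t, p in test if shortcut.get(q.split()[0]) != t]

def accuracy(pairs):
    return sum(t == p for t, p in pairs) / len(pairs) if pairs else float("nan")

# A model leaning on the shortcut looks fine on "easy" examples but
# degrades on counterexamples -- the gap is the diagnostic signal.
print("shortcut-friendly accuracy:", accuracy(easy))
print("counterexample accuracy:", accuracy(counter))
```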
Published in:
2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Jan 2022, Hawaii, United States
A plethora of methods have been proposed to explain how deep neural networks reach their decisions, but comparatively little effort has been made to ensure that the explanations produced by these methods are objectively relevant. While several desirable properties… (a minimal stability sketch follows this entry)
External link:
http://arxiv.org/abs/2009.04521
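A minimal sketch of one intuitive relevance check in this spirit: a good explanation should be stable when the input is perturbed slightly. The drift measure below is illustrative rather than the paper's exact metric, and the model is a stand-in.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64),
                      nn.ReLU(), nn.Linear(64, 10))
model.eval()

def saliency(x, cls):
    # Gradient of the class score w.r.t. the input, as the explanation.
    x = x.clone().requires_grad_(True)
    model(x)[0, cls].backward()
    return x.grad.detach()

image = torch.randn(1, 3, 32, 32)
reference = saliency(image, 5)

# Relative drift of the explanation under small input perturbations;
# a relevant explanation should not change wildly for near-identical inputs.
drifts = []
for _ in range(10):
    noisy = image + 0.01 * torch.randn_like(image)
    drifts.append((saliency(noisy, 5) - reference).norm() / reference.norm())
print("mean relative drift:", torch.stack(drifts).mean().item())
```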