Showing 1 - 10 of 10 for the search: '"Nanfack, Geraldin"'
Spurious correlations are a major source of errors for machine learning models, in particular when aiming for group-level fairness. It has been recently shown that a powerful approach to combat spurious correlations is to re-train the last layer on a
External link:
http://arxiv.org/abs/2409.14637
Understanding the inner workings of large-scale deep neural networks is challenging yet crucial in several high-stakes applications. Mechanistic interpretability is an emergent field that tackles this challenge, often by identifying hu
External link:
http://arxiv.org/abs/2406.01365
Author:
Nanfack, Geraldin, Fulleringer, Alexander, Marty, Jonathan, Eickenberg, Michael, Belilovsky, Eugene
The internal functional behavior of trained Deep Neural Networks is notoriously difficult to interpret. Activation-maximization approaches are one set of techniques used to interpret and analyze trained deep-learning models. These consist of finding
External link:
http://arxiv.org/abs/2306.07397
Author:
Stassin, Sédrick, Englebert, Alexandre, Nanfack, Géraldin, Albert, Julien, Versbraegen, Nassim, Peiffer, Gilles, Doh, Miriam, Riche, Nicolas, Frenay, Benoît, De Vleeschouwer, Christophe
EXplainable Artificial Intelligence (XAI) aims to help users to grasp the reasoning behind the predictions of an Artificial Intelligence (AI) system. Many XAI approaches have emerged in recent years. Consequently, a subfield related to the evaluation
External link:
http://arxiv.org/abs/2305.16361
Published in:
In Pattern Recognition, October 2023, Vol. 142
Recent research on Deep Convolutional Neural Networks has focused on improving accuracy, yielding significant advances. However, while such networks were once limited to classification tasks, nowadays with contributions from the Scientific Commun
External link:
http://arxiv.org/abs/1711.05491
Academic article
This result cannot be displayed to unauthenticated users; signing in is required to view it.
Published in:
Nanfack, G, Temple, P & Frénay, B 2021, Global Explanations with Decision Rules: a Co-learning Approach. In C de Campos & M H Maathuis (eds), Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence. Proceedings of Machine Learning Research, vol. 161, MLResearch Press, pp. 589-599.
Black-box machine learning models can be extremely accurate. Yet, in critical applications such as in healthcare or justice, if models cannot be explained, domain experts will be reluctant to use them. A common way to explain a black-box model is to
External link:
https://explore.openaire.eu/search/publication?articleId=od______4291::4f9a6933fde3c1b8f45ac5223ac9c6fa
https://pure.unamur.be/ws/files/61249658/Global_Explanations_with_Decision_Rules_a_Co_learning_Approach_camera_ready.pdf
Published in:
Proceedings of SPIE; 2018, Vol. 10696, p1-8, 8p
Author:
Verikas, Antanas, Radeva, Petia, Nikolaev, Dmitry, Zhou, Jianhong, Nanfack, Geraldin, Elhassouny, Azeddine, Oulad Haj Thami, Rachid
Published in:
Proceedings of SPIE; April 2018, Vol. 10696, Issue 1, p106962O-106962O-8