Showing 1 - 10 of 34 for search: '"Chihani Zakaria"'
Author:
Cancila Daniela, Daniel Geoffrey, Sirven Jean-Baptiste, Chihani Zakaria, Chersi Fabian, Vinciguerra Regis
Published in:
EPJ Web of Conferences, Vol 302, p 17005 (2024)
The development of applications and systems for the nuclear domain involves the interplay of many different disciplines and is, therefore, particularly complex. Additionally, these systems and their innovations have to be compliant with strict intern…
External link:
https://doaj.org/article/a1d16af1f807430fa0a97c69f050cd2d
Adversarial training is arguably the most popular way to provide empirical robustness against specific adversarial examples. While variants based on multi-step attacks incur significant computational overhead, single-step variants are vulnerable to a…
External link:
http://arxiv.org/abs/2410.01617
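The entry above contrasts multi-step and single-step adversarial training. Purely as an illustration of the single-step (FGSM-style) variant it mentions, here is a minimal PyTorch sketch; the model, optimizer and epsilon are placeholders, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_step(model, optimizer, x, y, epsilon=8 / 255):
    # Single-step attack: one gradient of the loss w.r.t. the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_adv = (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

    # One optimisation step on the adversarial example.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```

The single-step variant needs only one extra backward pass through the input per batch, which is why it is much cheaper than multi-step attacks such as PGD.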
Published in:
xAI 2024 - The 2nd World Conference on eXplainable Artificial Intelligence, Jul 2024, La Valette, Malta. pp.TBD
In the field of explainable AI, a vibrant effort is dedicated to the design of self-explainable models, as a more principled alternative to post-hoc methods that attempt to explain the decisions after a model opaquely makes them. However, this produc…
External link:
http://arxiv.org/abs/2409.16693
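For context on the post-hoc methods this entry contrasts with self-explainable models, here is a minimal sketch of one classic after-the-fact explanation, an input-gradient saliency map; the model and input are placeholders and this is not a method from the paper.

```python
import torch

def gradient_saliency(model, x):
    # Post-hoc explanation: gradient of the winning score w.r.t. the input.
    x = x.clone().detach().requires_grad_(True)
    scores = model(x)                                   # (batch, num_classes)
    top = scores.argmax(dim=1, keepdim=True)
    selected = scores.gather(1, top).sum()
    grad = torch.autograd.grad(selected, x)[0]
    return grad.abs()                                   # same shape as x
```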
Author:
Xu-Darme, Romain, Benois-Pineau, Jenny, Giot, Romain, Quénot, Georges, Chihani, Zakaria, Rousset, Marie-Christine, Zhukov, Alexey
In the field of Explainable AI, multiple evaluation metrics have been proposed in order to assess the quality of explanation methods w.r.t. a set of desired properties. In this work, we study the articulation between the stability, correctness and p…
External link:
http://arxiv.org/abs/2311.12860
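As a rough illustration of one of the properties this entry studies, the sketch below checks explanation stability by comparing the explanations of an input and of a slightly perturbed copy; the explainer, noise level and norm are assumptions, not the paper's metric.

```python
import torch

def stability_gap(model, explain_fn, x, sigma=0.01):
    # Larger value = the explanation changes more under a small perturbation.
    e_ref = explain_fn(model, x)
    e_noisy = explain_fn(model, x + sigma * torch.randn_like(x))
    return ((e_ref - e_noisy).norm() / (e_ref.norm() + 1e-12)).item()
```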
Author:
Xu-Darme, Romain, Girard-Satabin, Julien, Hond, Darryl, Incorvaia, Gabriele, Chihani, Zakaria
Published in:
Computer Safety, Reliability, and Security. SAFECOMP 2023 Workshops, Sep 2023, Toulouse, France
In this work, we propose CODE, an extension of existing work from the field of explainable AI that identifies class-specific recurring patterns to build a robust Out-of-Distribution (OoD) detection method for visual classifiers. CODE does not require…
External link:
http://arxiv.org/abs/2311.12855
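To illustrate the general idea of scoring out-of-distribution inputs from class-specific recurring patterns, here is a rough sketch; the tensor shapes, scoring rule and threshold are assumptions and do not reproduce CODE itself.

```python
import torch

def pattern_score(features, class_detectors, predicted_class):
    # features:        (C, H, W) feature map of one image
    # class_detectors: (num_classes, P, C) pattern vectors per class
    patterns = class_detectors[predicted_class]          # (P, C)
    responses = patterns @ features.flatten(1)           # (P, H*W)
    return responses.max(dim=1).values.mean().item()     # low => likely OoD

def is_out_of_distribution(score, threshold):
    return score < threshold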
Author:
Xu-Darme, Romain, Girard-Satabin, Julien, Hond, Darryl, Incorvaia, Gabriele, Chihani, Zakaria
Out-of-distribution (OoD) detection for data-based programs is a goal of paramount importance. Common approaches in the literature tend to train detectors requiring inside-of-distribution (in-distribution, or IoD) and OoD validation samples, and/or i…
External link:
http://arxiv.org/abs/2302.10303
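As a contrast to detectors that need OoD validation samples, the sketch below fits a simple detector on in-distribution features only (distance to the nearest class mean); it is a generic baseline, not the method proposed in the paper.

```python
import numpy as np

class NearestMeanOoD:
    # Fitted on in-distribution features only; no OoD validation samples.
    def fit(self, feats, labels):
        # feats: (N, D) features, labels: (N,) integer class labels
        self.means = np.stack([feats[labels == c].mean(axis=0)
                               for c in np.unique(labels)])
        return self

    def score(self, feats):
        # Distance to the closest class mean; larger => more likely OoD.
        diff = feats[:, None, :] - self.means[None, :, :]
        return np.linalg.norm(diff, axis=-1).min(axis=1)
```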
In this work, we perform an in-depth analysis of the visualisation methods implemented in two popular self-explaining models for visual classification based on prototypes - ProtoPNet and ProtoTree. Using two fine-grained datasets (CUB-200-2011 and St…
External link:
http://arxiv.org/abs/2302.08508
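For reference, prototype-based models such as ProtoPNet and ProtoTree typically visualise a prototype by upsampling its similarity map to image size and keeping the most activated region; the sketch below shows that common step, with the percentile and interpolation mode as assumptions rather than the papers' exact settings.

```python
import torch
import torch.nn.functional as F

def prototype_patch(similarity_map, image_hw, percentile=95):
    # similarity_map: (h, w) response of one prototype over the feature grid
    up = F.interpolate(similarity_map[None, None], size=image_hw,
                       mode="bilinear", align_corners=False)[0, 0]
    mask = up >= torch.quantile(up, percentile / 100.0)
    ys, xs = torch.nonzero(mask, as_tuple=True)
    # Bounding box of the most activated image region (y0, y1, x0, x1).
    return ys.min().item(), ys.max().item(), xs.min().item(), xs.max().item()
```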
In this paper, we present PARTICUL, a novel algorithm for unsupervised learning of part detectors from datasets used in fine-grained recognition. It exploits the macro-similarities of all images in the training set in order to mine for recurring patt…
External link:
http://arxiv.org/abs/2206.13304
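To give a flavour of unsupervised part detection over a frozen CNN backbone, here is a minimal sketch in which each detector is a 1x1 convolution rewarded for producing a strong peak in every image; the loss and dimensions are assumptions, not PARTICUL's actual objective.

```python
import torch
import torch.nn as nn

class PartDetectors(nn.Module):
    def __init__(self, in_channels, num_parts):
        super().__init__()
        # Each part detector is a 1x1 convolution over the backbone features.
        self.conv = nn.Conv2d(in_channels, num_parts, kernel_size=1)

    def forward(self, feature_map):
        # feature_map: (B, C, H, W) -> per-part activation maps (B, P, H, W)
        maps = torch.sigmoid(self.conv(feature_map))
        # Reward a strong peak for every part in every image (recurring pattern).
        peak = maps.flatten(2).max(dim=2).values          # (B, P)
        locality_loss = (1.0 - peak).mean()
        return maps, locality_loss
```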
Author:
Girard-Satabin, Julien, Alberti, Michele, Bobot, François, Chihani, Zakaria, Lemesle, Augustin
Published in:
AISafety, Jul 2022, Vienna, Austria
We present CAISAR, an open-source platform under active development for the characterization of AI systems' robustness and safety. CAISAR provides a unified entry point for defining verification problems by using WhyML, the mature and expressive lang…
External link:
http://arxiv.org/abs/2206.03044
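CAISAR specifications are written in WhyML; purely to illustrate the kind of property such a verification problem states, the Python sketch below expresses local robustness (argmax stability over an L-infinity ball) as an executable check on sampled points, which is of course not a formal proof. All names are placeholders.

```python
import numpy as np

def locally_robust_on_samples(model, x0, epsilon, num_samples=1000, seed=0):
    # Property: the predicted class never changes inside the L-inf ball.
    # Sampling can only find counterexamples; it cannot prove the property.
    rng = np.random.default_rng(seed)
    reference = int(np.argmax(model(x0)))
    for _ in range(num_samples):
        delta = rng.uniform(-epsilon, epsilon, size=x0.shape)
        if int(np.argmax(model(x0 + delta))) != reference:
            return False                                  # counterexample found
    return True
```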
Author:
Girard-Satabin, Julien, Varasse, Aymeric, Schoenauer, Marc, Charpiat, Guillaume, Chihani, Zakaria
The impressive results of modern neural networks partly come from their non-linear behaviour. Unfortunately, this property makes it very difficult to apply formal verification tools, even if we restrict ourselves to networks with a piecewise linear s…
External link:
http://arxiv.org/abs/2105.07776
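As an example of how verification tools can exploit piecewise-linear structure, the sketch below propagates interval bounds through a small ReLU network; the layer layout is a placeholder and this is only one standard technique, not necessarily the paper's approach.

```python
import numpy as np

def interval_bounds(weights, biases, lower, upper):
    # Propagate the input box [lower, upper] through affine + ReLU layers.
    for i, (W, b) in enumerate(zip(weights, biases)):
        center, radius = (upper + lower) / 2.0, (upper - lower) / 2.0
        lower = W @ center + b - np.abs(W) @ radius
        upper = W @ center + b + np.abs(W) @ radius
        if i < len(weights) - 1:                          # ReLU on hidden layers
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper
```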