Showing 1 - 10 of 35 for search: '"CHIHANI, Zakaria"'
Adversarial training is arguably the most popular way to provide empirical robustness against specific adversarial examples. While variants based on multi-step attacks incur significant computational overhead, single-step variants are vulnerable to a…
External link:
http://arxiv.org/abs/2410.01617
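For orientation, a minimal sketch of the single-step variant mentioned above (an FGSM-style adversarial training step), assuming a PyTorch image classifier with inputs in [0, 1]; the model, optimizer and epsilon below are illustrative placeholders, not the paper's setup.

import torch
import torch.nn.functional as F

def fgsm_training_step(model, optimizer, x, y, eps=8 / 255):
    # Single-step attack: perturb x along the sign of the input gradient.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0).detach()  # assumes inputs in [0, 1]

    # Train on the adversarial example: one step of adversarial training.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()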
Published in:
xAI 2024 - The 2nd World Conference on eXplainable Artificial Intelligence, Jul 2024, La Valette, Malta. pp.TBD
In the field of explainable AI, a vibrant effort is dedicated to the design of self-explainable models, as a more principled alternative to post-hoc methods that attempt to explain the decisions after a model opaquely makes them. However, this produc…
External link:
http://arxiv.org/abs/2409.16693
Author:
Xu-Darme, Romain, Benois-Pineau, Jenny, Giot, Romain, Quénot, Georges, Chihani, Zakaria, Rousset, Marie-Christine, Zhukov, Alexey
In the field of Explainable AI, multiple evaluation metrics have been proposed in order to assess the quality of explanation methods w.r.t. a set of desired properties. In this work, we study the articulation between the stability, correctness and p…
External link:
http://arxiv.org/abs/2311.12860
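To illustrate one of the properties mentioned in this entry, a rough sketch of a stability check for a saliency-based explanation method: the explanation of an input is compared with explanations of slightly perturbed copies. The explain callable, noise level and cosine similarity are assumptions made for illustration, not the metrics studied in the paper.

import numpy as np

def stability_score(explain, x, sigma=0.01, n_trials=10, rng=None):
    # 'explain' is a hypothetical callable mapping an input array to a saliency map.
    rng = rng or np.random.default_rng(0)
    base = explain(x).ravel()
    base = base / (np.linalg.norm(base) + 1e-12)
    sims = []
    for _ in range(n_trials):
        x_noisy = x + rng.normal(0.0, sigma, size=x.shape)
        e = explain(x_noisy).ravel()
        e = e / (np.linalg.norm(e) + 1e-12)
        sims.append(float(base @ e))  # cosine similarity with the unperturbed explanation
    return float(np.mean(sims))       # close to 1.0 = stable explanation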
Author:
Xu-Darme, Romain, Girard-Satabin, Julien, Hond, Darryl, Incorvaia, Gabriele, Chihani, Zakaria
Published in:
Computer Safety, Reliability, and Security. SAFECOMP 2023 Workshops, Sep 2023, Toulouse, France
In this work, we propose CODE, an extension of existing work from the field of explainable AI that identifies class-specific recurring patterns to build a robust Out-of-Distribution (OoD) detection method for visual classifiers. CODE does not require…
External link:
http://arxiv.org/abs/2311.12855
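The snippet above does not spell out how CODE scores an input, so the following is only a loose, hypothetical illustration of the general idea of pattern-based OoD detection: aggregate the responses of class-specific part detectors and flag inputs for which no class's recurring patterns are found.

import numpy as np

def pattern_ood_score(detector_scores):
    # detector_scores: array of shape (num_classes, num_parts_per_class),
    # each entry in [0, 1] saying how strongly a class-specific recurring
    # pattern was detected in the input (an illustrative convention, not CODE's).
    per_class = detector_scores.mean(axis=1)  # completeness of each class's pattern set
    return float(per_class.max())             # confidence of the best-explained class

def is_ood(detector_scores, threshold=0.5):
    # Low score: no class's patterns are present, so the input is flagged as OoD.
    return pattern_ood_score(detector_scores) < threshold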
Author:
Xu-Darme, Romain, Girard-Satabin, Julien, Hond, Darryl, Incorvaia, Gabriele, Chihani, Zakaria
Out-of-distribution (OoD) detection for data-based programs is a goal of paramount importance. Common approaches in the literature tend to train detectors requiring inside-of-distribution (in-distribution, or IoD) and OoD validation samples, and/or i…
External link:
http://arxiv.org/abs/2302.10303
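For contrast with the detectors described above that need OoD validation samples, here is a minimal maximum-softmax-probability baseline, which only needs the trained classifier itself; it is a generic baseline sketch assuming a PyTorch model, not the method proposed in this entry.

import torch
import torch.nn.functional as F

@torch.no_grad()
def max_softmax_score(model, x):
    # Higher score = more confident, i.e. more likely in-distribution.
    probs = F.softmax(model(x), dim=1)
    return probs.max(dim=1).values

def flag_ood(model, x, threshold=0.7):
    # The threshold would normally be calibrated on in-distribution data only.
    return max_softmax_score(model, x) < threshold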
In this work, we perform an in-depth analysis of the visualisation methods implemented in two popular self-explaining models for visual classification based on prototypes - ProtoPNet and ProtoTree. Using two fine-grained datasets (CUB-200-2011 and St…
External link:
http://arxiv.org/abs/2302.08508
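A compressed sketch of the kind of visualisation analysed in this entry: in ProtoPNet-like models, the similarity map between a prototype and the convolutional feature map is upsampled to image resolution and thresholded to obtain the highlighted image region. Shapes and the 95th-percentile threshold below are illustrative assumptions.

import numpy as np
from scipy.ndimage import zoom

def prototype_bounding_box(similarity_map, image_hw, percentile=95):
    # similarity_map: (h, w) similarities between one prototype and the feature map.
    H, W = image_hw
    h, w = similarity_map.shape
    upsampled = zoom(similarity_map, (H / h, W / w), order=1)  # linear upsampling to image size
    mask = upsampled >= np.percentile(upsampled, percentile)   # keep the most activated region
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()              # x1, y1, x2, y2 of the highlighted part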
In this paper, we present PARTICUL, a novel algorithm for unsupervised learning of part detectors from datasets used in fine-grained recognition. It exploits the macro-similarities of all images in the training set in order to mine for recurring patterns…
External link:
http://arxiv.org/abs/2206.13304
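PARTICUL itself is only summarised above; as a hedged sketch of the general idea, part detectors can be implemented as 1x1 convolutions over a frozen backbone's feature maps, each encouraged to fire on a single compact location per image. The module below and its entropy-style locality term are assumptions for illustration, not the paper's losses.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PartDetectors(nn.Module):
    # n_parts 1x1 convolutions over a frozen feature map of shape (B, C, H, W);
    # each detector outputs a spatial activation map for one recurring pattern.
    def __init__(self, in_channels, n_parts=4):
        super().__init__()
        self.detectors = nn.Conv2d(in_channels, n_parts, kernel_size=1)

    def forward(self, features):
        maps = self.detectors(features)          # (B, n_parts, H, W)
        flat = maps.flatten(2)                   # (B, n_parts, H*W)
        probs = F.softmax(flat, dim=-1)
        # Low entropy = each detector peaks at one location (a "part").
        locality = -(probs * torch.log(probs + 1e-12)).sum(-1).mean()
        return maps, locality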
Author:
Girard-Satabin, Julien, Alberti, Michele, Bobot, François, Chihani, Zakaria, Lemesle, Augustin
Published in:
AISafety, Jul 2022, Vienna, Austria
We present CAISAR, an open-source platform under active development for the characterization of AI systems' robustness and safety. CAISAR provides a unified entry point for defining verification problems by using WhyML, the mature and expressive language…
External link:
http://arxiv.org/abs/2206.03044
Author:
Girard-Satabin, Julien, Varasse, Aymeric, Schoenauer, Marc, Charpiat, Guillaume, Chihani, Zakaria
The impressive results of modern neural networks partly come from their non-linear behaviour. Unfortunately, this property makes it very difficult to apply formal verification tools, even if we restrict ourselves to networks with a piecewise linear structure…
External link:
http://arxiv.org/abs/2105.07776
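To make the difficulty concrete (this is a toy illustration, not the paper's method): a ReLU network is affine on each fixed activation pattern, so a naive exact verifier has to reason over up to 2^n pieces for n ReLUs, as the brute-force enumeration below shows.

import itertools
import numpy as np

def enumerate_linear_regions(W1, b1, W2, b2):
    # Tiny one-hidden-layer ReLU net: f(x) = W2 @ relu(W1 @ x + b1) + b2.
    # On each on/off pattern of the hidden ReLUs, f reduces to an affine map.
    n_hidden = W1.shape[0]
    pieces = []
    for pattern in itertools.product([0, 1], repeat=n_hidden):
        d = np.diag(pattern)            # which ReLUs are active on this piece
        A = W2 @ d @ W1                 # affine map valid on this activation pattern
        c = W2 @ d @ b1 + b2
        pieces.append((pattern, A, c))
    return pieces                       # exponential in the number of ReLUs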
The topic of provable deep neural network robustness has raised considerable interest in recent years. Most research has focused on adversarial robustness, which studies the robustness of perceptive models in the neighbourhood of particular samples.
External link:
http://arxiv.org/abs/1911.10735
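As a minimal example of the kind of provable guarantee discussed above (a generic interval bound propagation sketch, not the contribution of this entry): propagate the box [x - eps, x + eps] through a small ReLU network and check that the true class outscores every other class on the whole box.

import numpy as np

def interval_bounds(layers, x, eps):
    # layers: list of (W, b) pairs for a fully connected ReLU network.
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if i < len(layers) - 1:          # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0), np.maximum(new_hi, 0)
        lo, hi = new_lo, new_hi
    return lo, hi

def certified_robust(layers, x, eps, label):
    lo, hi = interval_bounds(layers, x, eps)
    others_hi = np.delete(hi, label)
    return bool(lo[label] > others_hi.max())  # true class wins on every point of the L-inf ball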