Showing 1 - 10 of 30
for search: '"Leofante, Francesco"'
We study the problem of assessing the robustness of counterfactual explanations for deep learning models. We focus on $\textit{plausible model shifts}$ altering model parameters and propose a novel framework to reason about the robustness property in…
External link:
http://arxiv.org/abs/2407.07482
Author:
Leofante, Francesco, Ayoobi, Hamed, Dejl, Adam, Freedman, Gabriel, Gorur, Deniz, Jiang, Junqi, Paulino-Passos, Guilherme, Rago, Antonio, Rapberger, Anna, Russo, Fabrizio, Yin, Xiang, Zhang, Dekai, Toni, Francesca
AI has become pervasive in recent years, but state-of-the-art approaches predominantly neglect the need for AI systems to be contestable. Instead, contestability is advocated by AI guidelines (e.g. by the OECD) and regulation of automated decision-making…
External link:
http://arxiv.org/abs/2405.10729
Counterfactual Explanations (CEs) have emerged as a major paradigm in explainable AI research, providing recourse recommendations for users affected by the decisions of machine learning models. However, CEs found by existing methods often become invalid…
External link:
http://arxiv.org/abs/2404.13736
Counterfactual explanations (CEs) are advocated as being ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models. While CEs can be beneficial to affected individuals, recent work has exposed…
External link:
http://arxiv.org/abs/2402.01928
Model Multiplicity (MM) arises when multiple, equally performing machine learning models can be trained to solve the same prediction task. Recent studies show that models obtained under MM may produce inconsistent predictions for the same input. When…
External link:
http://arxiv.org/abs/2312.15097
Author:
Leofante, Francesco, Potyka, Nico
Counterfactual explanations shed light on the decisions of black-box models by explaining how an input can be altered to obtain a favourable decision from the model (e.g., when a loan application has been rejected). However, as noted recently, counterfactual…
External link:
http://arxiv.org/abs/2312.06564
Counterfactual Explanations (CEs) have received increasing interest as a major methodology for explaining neural network classifiers. Usually, CEs for an input-output pair are defined as data points with minimum distance to the input that are classified…
External link:
http://arxiv.org/abs/2309.12545
The use of counterfactual explanations (CFXs) is an increasingly popular explanation strategy for machine learning models. However, recent studies have shown that these explanations may not be robust to changes in the underlying model (e.g., following…
External link:
http://arxiv.org/abs/2208.14878
Published in:
EPTCS 361, 2022, pp. 61-77
A swarm robotic system consists of a team of robots performing cooperative tasks without any centralized coordination. In principle, swarms enable flexible and scalable solutions; however, designing individual control algorithms that can guarantee a…
External link:
http://arxiv.org/abs/2207.06758
Verification of deep neural networks has witnessed a recent surge of interest, fueled by success stories in diverse domains and by accompanying concerns about safety and security in envisaged applications. The complexity and sheer size of such networks are challenging…
External link:
http://arxiv.org/abs/2003.07636