Showing 1 - 10 of 719 for search: '"Keane, Mark A."'
Author:
McAleese, Stephen, Keane, Mark
Counterfactual explanations can be used to interpret and debug text classifiers by producing minimally altered text inputs that change a classifier's output. In this work, we evaluate five methods for generating counterfactual explanations for a BERT…
External link:
http://arxiv.org/abs/2411.02643
Author:
Aryal, Saugat, Keane, Mark T.
Published in:
32nd International Conference on Case-Based Reasoning (ICCBR) 2024, Merida, Mexico
Recently, counterfactuals using "if-only" explanations have become very popular in eXplainable AI (XAI), as they describe which changes to feature-inputs of a black-box AI system result in changes to a (usually negative) decision-outcome. Even more…
External link:
http://arxiv.org/abs/2403.00980
Explainable AI (XAI) has been proposed as a valuable tool to assist in downstream tasks involving human and AI collaboration. Perhaps the most psychologically valid XAI techniques are case-based approaches, which display 'whole' exemplars to explain…
External link:
http://arxiv.org/abs/2311.03246
Published in:
ECML PKDD 2018, Lecture Notes in Computer Science, vol. 11053. Springer, Cham
We present a text mining system to support the exploration of large volumes of text detailing the findings of government inquiries. Despite their historical significance and potential societal impact, key findings of inquiries are often hidden within…
External link:
http://arxiv.org/abs/2308.02556
Counterfactual explanations are an increasingly popular form of post hoc explanation due to their (i) applicability across problem domains, (ii) proposed legal compliance (e.g., with GDPR), and (iii) reliance on the contrastive nature of human…
External link:
http://arxiv.org/abs/2303.09297
Author:
Aryal, Saugat, Keane, Mark T
Published in:
32nd International Joint Conference on Artificial Intelligence (IJCAI-23), Macao, China, 2023
Recently, eXplainable AI (XAI) research has focused on counterfactual explanations as post-hoc justifications for AI-system decisions (e.g., a customer refused a loan might be told: if you had asked for a loan with a shorter term, it would have been…
External link:
http://arxiv.org/abs/2301.11970
Author:
Ford, Courtney, Keane, Mark T
Very few eXplainable AI (XAI) studies consider how users' understanding of explanations might change depending on whether they know more or less about the to-be-explained domain (i.e., whether they differ in their expertise). Yet, expertise is a…
External link:
http://arxiv.org/abs/2212.09342
Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems due to their psychological validity, flexibility across problem domains, and proposed…
External link:
http://arxiv.org/abs/2212.08733
Published in:
IJCAI-22 Workshop on Cognitive Aspects of Knowledge Representation (2022)
Counterfactual explanations are increasingly used to address interpretability, recourse, and bias in AI decisions. However, we do not know how well counterfactual explanations help users to understand a system's decisions, since no large-scale user…
External link:
http://arxiv.org/abs/2204.10152
Author:
Temraz, Mohammed, Keane, Mark T.
Learning from class-imbalanced datasets poses challenges for many machine learning algorithms. Many real-world domains are, by definition, class-imbalanced by virtue of having a majority class that naturally has many more instances than its minority…
External link:
http://arxiv.org/abs/2111.03516