One counterfactual does not make an explanation
Author: | Raphaela Butz, Arjen Hommersom, Marco Barenkamp, Hans van Ditmarsch |
---|---|
Contributors: | RS-Research Program Towards High-Quality and Intelligent Software (THIS), Department of Computer Science, RS-Research Line Artificial intelligence (part of THIS program), RS-Research Line Resilience (part of LIRS program) |
Language: | English |
Year of publication: | 2022 |
Source: | Butz, R, Hommersom, A, Barenkamp, M & van Ditmarsch, H 2022, 'One counterfactual does not make an explanation', paper presented at BNAIC/BeNeLearn 2022, Mechelen, Belgium, 7/11/22–9/11/22, pp. 1–11. <https://bnaic2022.uantwerpen.be/BNAICBeNeLearn_2022_submission_6245> |
Description: | Counterfactual explanations have gained popularity in artificial intelligence in recent years. It is well known that counterfactuals can be generated from causal Bayesian networks, but there is no indication of which of them are useful for explanatory purposes. In this paper, we examine which types of counterfactuals are perceived as more useful explanations by the end user. For this purpose, we conducted a questionnaire to test whether counterfactuals that change an actionable cause are considered more useful than counterfactuals that change a direct cause. The results of the questionnaire showed that actionable counterfactuals are preferred, regardless of whether they are the direct cause or have a longer causal chain. |
Databáze: | OpenAIRE |
External link: |