Showing 1 - 10 of 200 results for the search: "VETERE, FRANK"
Author:
Singh, Ronal, Miller, Tim, Sonenberg, Liz, Velloso, Eduardo, Vetere, Frank, Howe, Piers, Dourish, Paul
In this paper, we introduce and evaluate a tool for researchers and practitioners to assess the actionability of information provided to users to support algorithmic recourse. While there are clear benefits of recourse from the user's perspective, th…
External link:
http://arxiv.org/abs/2407.09516
Author:
Singh, Ronal, Dourish, Paul, Howe, Piers, Miller, Tim, Sonenberg, Liz, Velloso, Eduardo, Vetere, Frank
This paper investigates the prospects of using directive explanations to assist people in achieving recourse of machine learning decisions. Directive explanations list which specific actions an individual needs to take to achieve their desired outcom…
External link:
http://arxiv.org/abs/2102.02671
In this paper we introduce and evaluate a distal explanation model for model-free reinforcement learning agents that can generate explanations for `why' and `why not' questions. Our starting point is the observation that causal models can generate op…
External link:
http://arxiv.org/abs/2001.10284
Prevalent theories in cognitive science propose that humans understand and represent the knowledge of the world through causal relationships. In making sense of the world, we build causal models in our mind to encode cause-effect relations of events…
External link:
http://arxiv.org/abs/1905.10958
Explainable Artificial Intelligence (XAI) systems need to include an explanation model to communicate the internal decisions, behaviours and actions to the interacting humans. Successful explanation involves both cognitive and social processes. In th…
External link:
http://arxiv.org/abs/1903.02409
As artificial intelligence (AI) systems become increasingly complex and ubiquitous, these systems will be responsible for making decisions that directly affect individuals and society as a whole. Such decisions will need to be justified due to ethica…
External link:
http://arxiv.org/abs/1812.08597
To generate trust with their users, Explainable Artificial Intelligence (XAI) systems need to include an explanation model that can communicate the internal decisions, behaviours and actions to the interacting humans. Successful explanation involves…
External link:
http://arxiv.org/abs/1806.08055
Published in:
In Artificial Intelligence, July 2020, 284
Author:
Baker, Steven, Waycott, Jenny, Robertson, Elena, Carrasco, Romina, Neves, Barbara Barbosa, Hampson, Ralph, Vetere, Frank
Published in:
In Information Processing and Management, May 2020, 57(3)