Showing 1 - 10 of 313
for search: '"Robert R. Hoffman"'
Published in:
Frontiers in Computer Science, Vol 5 (2023)
Introduction: The purpose of the Stakeholder Playbook is to enable the developers of explainable AI systems to take into account the different ways in which different stakeholders or role-holders need to “look inside” the AI/XAI systems. Method: We co…
External link:
https://doaj.org/article/f699f78914d745adb251e3955c25c686
Published in:
Frontiers in Computer Science, Vol 5 (2023)
Introduction: Many Explainable AI (XAI) systems provide explanations that are just clues or hints about the computational models, such as feature lists, decision trees, or saliency images. However, a user might want answers to deeper questions su…
External link:
https://doaj.org/article/d0f1e68f2d66494da438531b08fa48dd
Published in:
Frontiers in Psychology, Vol 14 (2023)
When people make plausibility judgments about an assertion, an event, or a piece of evidence, they are gauging whether it makes sense that the event could transpire as it did. Therefore, we can treat plausibility judgments as a part of sensemaking. I…
External link:
https://doaj.org/article/701cf2e8f25047ceb636e1efe602e98e
Published in:
Frontiers in Computer Science, Vol 5 (2023)
If a user is presented with an AI system that purports to explain how it works, how do we know whether the explanation works and whether the user has achieved a pragmatic understanding of the AI? This question entails some key concepts of measurement, such as expl…
External link:
https://doaj.org/article/81e1ca453dd240bba3353096b1868aee
Author:
William J. Clancey, Robert R. Hoffman
Published in:
Applied AI Letters, Vol 2, Iss 4, Pp n/a-n/a (2021)
Abstract: The DARPA Explainable Artificial Intelligence (AI) (XAI) Program focused on generating explanations for AI programs that use machine learning techniques. This article highlights progress during the DARPA Program (2017-2021) relative to res…
External link:
https://doaj.org/article/7420ee2b1e3c428f975b914a3cc8afe9
Author:
Paul Ward, Robert R. Hoffman, Gareth E. Conway, Jan Maarten Schraagen, David Peebles, Robert J. B. Hutton, Erich J. Petushek
Published in:
Frontiers in Psychology, Vol 8 (2017)
External link:
https://doaj.org/article/22551d6e4ee64a9bb6c8ee5bd3ad9680
Published in:
The American Journal of Psychology. 135:365-378
A challenge in building useful artificial intelligence (AI) systems is that people need to understand how they work in order to achieve appropriate trust and reliance. This has become a topic of considerable interest, manifested as a surge of researc…
Published in:
Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 66:1270-1274
Modern artificial intelligence (AI) and machine learning (ML) systems have become more capable and more widely used, but often involve underlying processes their users do not understand and may not trust. Some researchers have addressed this by devel…
Published in:
Journal of Cognitive Engineering and Decision Making. 15:213-232
The process of explaining something to another person is more than offering a statement. Explaining means taking the perspective and knowledge of the Learner into account and determining whether the Learner is satisfied. While the nature of explanati…
Published in:
Journal of Intelligence History. 20:45-59
The key to effective 21st century intelligence is our sensemaking process. In this article, we present a case for why Cold War-era reductive intelligence models have become obsolete, and we show th...