Author:
Hauptman, Allyson I., Schelble, Beau G., Duan, Wen, Flathmann, Christopher, McNeese, Nathan J.
Subject:

Source:
Cognition, Technology & Work; Sep2024, Vol. 26 Issue 3, p435-455, 21p
Abstract:
An obstacle to effective teaming between humans and AI is the agent's "black box" design. AI explanations have proven benefits, but few studies have explored the effects that explanations have in teaming environments where AI agents operate at heightened levels of autonomy. To address this research gap, we conducted two complementary studies, an experiment and participatory design sessions, investigating the effect that varying levels of AI explainability and AI autonomy have on participants' perceived trust in and competence of an AI teammate. The results of the experiment were counter-intuitive: participants actually perceived the lower-explainability agent as both more trustworthy and more competent. The participatory design sessions further revealed how a team's need to know influences when and what human teammates need explained by AI teammates. Based on these findings, we developed several design recommendations for the HCI community to guide how AI teammates should share decision information with their human counterparts, given the careful balance between trust and competence in human-AI teams. [ABSTRACT FROM AUTHOR]
Database:
Complementary Index
External link:
