Authors:
de Visser EJ; Human Factors and Applied Cognition, Department of Psychology, George Mason University, Fairfax, VA, United States; Warfighter Effectiveness Research Center, Department of Behavioral Sciences and Leadership, United States Air Force Academy, Colorado Springs, CO, United States.
Beatty PJ; Cognitive and Behavioral Neuroscience, Department of Psychology, George Mason University, Fairfax, VA, United States.
Estepp JR; 711 Human Performance Wing/RHCPA, Air Force Research Laboratory, Wright-Patterson Air Force Base, Dayton, OH, United States.
Kohn S; Human Factors and Applied Cognition, Department of Psychology, George Mason University, Fairfax, VA, United States.
Abubshait A; Human Factors and Applied Cognition, Department of Psychology, George Mason University, Fairfax, VA, United States.
Fedota JR; Intramural Research Program, National Institute on Drug Abuse, National Institutes of Health, Baltimore, MD, United States.
McDonald CG; Cognitive and Behavioral Neuroscience, Department of Psychology, George Mason University, Fairfax, VA, United States.
Abstract:
With the rise of increasingly complex artificial intelligence (AI), there is a need to design new methods to monitor AI in a transparent, human-aware manner. Decades of research have demonstrated that people who are not aware of the exact performance levels of automated algorithms often experience a mismatch in expectations. Consequently, they will often place either too little or too much trust in an algorithm. Detecting such a mismatch in expectations, or trust calibration, remains a fundamental challenge in research investigating the use of automation. Due to the context-dependent nature of trust, universal measures of trust have not been established. Trust is a difficult construct to investigate because even the act of reflecting on how much a person trusts a certain agent can change the perception of that agent. We hypothesized that electroencephalograms (EEGs) would be able to provide such a universal index of trust without the need for self-report. In this work, EEGs were recorded for 21 participants (mean age = 22.1; 13 females) while they observed a series of algorithms perform a modified version of a flanker task. Each algorithm's credibility and reliability were manipulated. We hypothesized that neural markers of action monitoring, such as the observational error-related negativity (oERN) and observational error positivity (oPe), are potential candidates for monitoring computer algorithm performance. Our findings demonstrate that (1) it is possible to reliably elicit both the oERN and oPe while participants monitored these computer algorithms, (2) the oPe, as opposed to the oERN, significantly distinguished between high- and low-reliability algorithms, and (3) the oPe significantly correlated with subjective measures of trust. This work provides the first evidence for the utility of neural correlates of error monitoring for examining trust in computer algorithms.