Supporting Trust Calibration and the Effective Use of Decision Aids by Presenting Dynamic System Confidence Information
Authors: | John M. McGuirl, Nadine B. Sarter |
Year of publication: | 2006 |
Subject: | Adult; Male; Decision support system; Aircraft; Operations research; Calibration (statistics); Poison control; Human Factors and Ergonomics; Trust; Flight simulator; Automation; User-Computer Interface; Behavioral Neuroscience; Task Performance and Analysis; Odds Ratio; Decision aids; Humans; Applied Psychology; Reliability (statistics); Middle Aged; Variety (cybernetics); Logistic Models; Accidents, Aviation; Female; Neural Networks, Computer; Aviation; Psychology |
Source: | Human Factors: The Journal of the Human Factors and Ergonomics Society. 48:656-665 |
ISSN: | 1547-8181 0018-7208 |
DOI: | 10.1518/001872006779166334 |
Description: | Objective: To examine whether continually updated information about a system's confidence in its ability to perform assigned tasks improves operators' trust calibration in, and use of, an automated decision support system (DSS). Background: The introduction of decision aids often leads to performance breakdowns related to automation bias and trust miscalibration. This can be explained, in part, by the fact that operators are informed only about overall system reliability, which makes it impossible for them to decide on a case-by-case basis whether to follow the system's advice. Method: The application for this research was a neural net-based decision aid that assists pilots with detecting and handling in-flight icing encounters. A multifactorial experiment was carried out with two groups of 15 instructor pilots, each flying a series of 28 approaches in a motion-base simulator. One group was informed only about the system's overall reliability, whereas the other group received continually updated system confidence information. Results: Pilots in the updated group experienced significantly fewer icing-related stalls and were more likely to reverse their initial response to an icing condition when it did not produce the desired results. Their estimates of the system's accuracy were also closer to its actual performance than those of the fixed group. Conclusion: The presentation of continually updated system confidence information can improve trust calibration and thus lead to better performance of the human-machine team. Application: The findings from this research can inform the design of decision support systems in a variety of event-driven, high-tempo domains. |
Database: | OpenAIRE |
External link: |