Automation Bias in AI-Decision Support: Results from an Empirical Study.

Author: KÜCKING, Florian; HÜBNER, Ursula; PRZYSUCHA, Mareike; HANNEMANN, Niels; KUTZA, Jan-Oliver; MOELLEKEN, Maurice; ERFURT-BERGE, Cornelia; DISSEMOND, Joachim; BABITSCH, Birgit; BUSCH, Dorothee
Source: Studies in Health Technology & Informatics; 2024, Vol. 317, p. 298-304, 7p
Abstract: Introduction: Automation bias poses a significant challenge to the effectiveness of Clinical Decision Support Systems (CDSS), potentially compromising diagnostic accuracy. Previous research highlights trust, self-confidence, and task difficulty as key determinants. With the increasing availability of AI-enabled CDSS, automation bias is attracting renewed attention. This study therefore aims to identify factors influencing automation bias in a diagnostic task. Methods: A quantitative intervention study with participants from different backgrounds (n = 210) was conducted, using regression analysis to examine potential influencing factors. Automation bias was measured as the rate of agreement with wrong AI-enabled recommendations. Results and Discussion: Diagnostic performance, certified wound care training, physician profession, and female gender significantly reduced false agreement rates. Higher perceived benefit of the system was significantly associated with higher false agreement. Strategies such as comprehensive diagnostic training are pivotal for preventing automation bias when implementing CDSS. Conclusion: Considering the factors that influence automation bias when introducing a CDSS is critical to fully leverage the benefits of such a system. This study highlights that non-specialists, who stand to gain the most from CDSS, are also the most susceptible to automation bias, emphasizing the need for specialized training to mitigate this risk and ensure diagnostic accuracy and patient safety. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
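
The abstract describes measuring automation bias as the agreement rate with wrong AI-enabled recommendations and analyzing its determinants via regression. The sketch below is a minimal illustration of one way such an analysis could be set up, assuming a binomial GLM on participant-level false agreement rates; the variable names, simulated data, coefficient values, and model family are assumptions for illustration only and are not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 210  # matches the reported sample size; everything else is simulated

# Hypothetical participant-level predictors, loosely mirroring the factors
# named in the abstract (diagnostic performance, wound care certification,
# physician profession, gender, perceived benefit of the CDSS).
df = pd.DataFrame({
    "diagnostic_score":  rng.uniform(0.3, 1.0, n),
    "wound_care_cert":   rng.integers(0, 2, n),
    "physician":         rng.integers(0, 2, n),
    "female":            rng.integers(0, 2, n),
    "perceived_benefit": rng.uniform(1.0, 5.0, n),
    "wrong_recs":        np.full(n, 8),  # wrong AI recommendations shown per participant
})

# Simulate how many wrong recommendations each participant agreed with;
# the signs only echo the directions reported in the abstract.
logit = (-0.5
         - 2.0 * df.diagnostic_score
         - 0.8 * df.wound_care_cert
         - 0.6 * df.physician
         - 0.4 * df.female
         + 0.5 * df.perceived_benefit)
p = 1.0 / (1.0 + np.exp(-logit))
df["false_agreement_rate"] = rng.binomial(df["wrong_recs"], p) / df["wrong_recs"]

# Binomial GLM on the false-agreement rate, weighted by the number of
# wrong recommendations each participant saw.
model = smf.glm(
    "false_agreement_rate ~ diagnostic_score + wound_care_cert + physician"
    " + female + perceived_benefit",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=np.asarray(df["wrong_recs"]),
).fit()
print(model.summary())
```

With this setup, negative coefficients correspond to factors that reduce false agreement (i.e., protect against automation bias), while positive coefficients indicate factors associated with more frequent agreement with wrong recommendations.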