Trust in and Acceptance of Artificial Intelligence Applications in Medicine: Mixed Methods Study.

Author: Shevtsova D (Panaxea bv, Den Bosch, Netherlands; Vrije Universiteit Amsterdam, Amsterdam, Netherlands); Ahmed A (Panaxea bv, Den Bosch, Netherlands); Boot IWA (Panaxea bv, Den Bosch, Netherlands); Sanges C (Universitätsklinikum Würzburg, Würzburg, Germany); Hudecek M (Universitätsklinikum Würzburg, Würzburg, Germany); Jacobs JJL (Ortec bv, Zoetermeer, Netherlands); Hort S (Fraunhofer Institute for Production Technology, Aachen, Germany); Vrijhoef HJM (Panaxea bv, Den Bosch, Netherlands)
Language: English
Source: JMIR Human Factors [JMIR Hum Factors] 2024 Jan 17; Vol. 11, pp. e47031. Date of Electronic Publication: 2024 Jan 17.
DOI: 10.2196/47031
Abstract: Background: Artificial intelligence (AI)-powered technologies are increasingly used in almost all fields, including medicine. For medical AI applications to spread and be adopted in a timely manner worldwide, however, trust in and acceptance of such technologies must be established. Although AI applications in medicine provide advantages to the current health care system, they are also associated with various challenges regarding, for instance, data privacy, accountability, and equity and fairness, which could hinder their implementation.
Objective: The aim of this study was to identify factors related to trust in and acceptance of novel AI-powered medical technologies and to assess how relevant key stakeholders consider those factors to be.
Methods: This study used a mixed methods design. First, a rapid review of the existing literature was conducted to identify factors related to trust in and acceptance of novel AI applications in medicine. Next, an electronic survey covering the factors derived from the rapid review was disseminated among key stakeholder groups. Participants (N=22) rated on a 5-point Likert scale (1=irrelevant to 5=relevant) the extent to which they considered each of the 19 factors relevant to trust in and acceptance of novel AI applications in medicine.
Results: The rapid review (N=32 papers) yielded 110 factors related to trust in and 77 factors related to acceptance of AI technology in medicine. Closely related factors were assigned to 1 of 19 overarching umbrella factors, which were further grouped into 4 categories: human-related (eg, the type of institution AI professionals originate from), technology-related (eg, the explainability and transparency of AI application processes and outcomes), ethical and legal (eg, data use transparency), and additional factors (eg, AI applications being environment friendly). The 19 umbrella factors were presented as survey statements, which were evaluated by relevant stakeholders. Survey participants (N=22) represented researchers (n=18, 82%), technology providers (n=5, 23%), hospital staff (n=3, 14%), and policy makers (n=3, 14%); participants could represent more than 1 group, so percentages sum to more than 100%. Of the 19 factors, 16 (84%) spanning the human-related, technology-related, ethical and legal, and additional categories were considered highly relevant to trust in and acceptance of novel AI applications in medicine, whereas the patient's gender, age, and education level (3/19, 16%) were considered of low relevance.
Conclusions: The results of this study could help the implementers of medical AI applications understand what drives trust in and acceptance of AI-powered technologies among key stakeholders in medicine. Consequently, implementers could identify strategies that foster trust in and acceptance of medical AI applications among key stakeholders and potential users.
(©Daria Shevtsova, Anam Ahmed, Iris W A Boot, Carmen Sanges, Michael Hudecek, John J L Jacobs, Simon Hort, Hubertus J M Vrijhoef. Originally published in JMIR Human Factors (https://humanfactors.jmir.org), 17.01.2024.)
Database: MEDLINE