Explainable machine learning for decision support in healthcare: A scoping review (Preprint)

Authors: Michael Shulha, Samira Rahimi, Amrita Sandhu, Gauri Sharma, Vinita D'Souza, Rola Harmouche, Jordan Hovdebo
Year of publication: 2022
DOI: 10.2196/preprints.39196
Description:

BACKGROUND: The uptake of machine learning-based decision support has faced challenges in real-world clinical scenarios. A key reason is that clinicians lack trust in black-box machine learning models. One response to this challenge is the use of explainable approaches to machine learning, which ideally allow an end user to understand why a specific prediction is being made.

OBJECTIVE: This study aimed to describe the scope of explainable machine learning (XML) research in clinical decision support and to identify the approaches and frameworks that have been used to study end-user perceptions of explainability.

METHODS: Following PRISMA guidelines, a search protocol was developed and executed in Ovid MEDLINE ALL(R), EMBASE Classic + EMBASE, Web of Science Core Collection, CINAHL, and Cochrane Library CENTRAL (Trials) to identify eligible articles. Studies describing the testing, piloting, or implementation of explainable machine learning tools designed to support clinical decision making were eligible for synthesis. We summarized the machine learning methods, clinical scope, intended end users, and decision focus. In a subanalysis, we also summarized the design and visual elements employed by researchers and the associated methodological approaches used to assess end-user perceptions of explainability. Finally, we conducted a thematic analysis to better understand the perceived potential health system and clinical end-user benefits of explainable machine learning-based decision support.

RESULTS: The majority of studies focused on developing tools for doctors as the intended end users (85%) for diagnostic support (45%) in the context of secondary care (55%). Explainability methods were highly varied, with the majority of studies using a unique explainability model (76%). Only 12% of studies discussed a testing phase to assess the suitability of explainability methods with clinical end users. Improved end-user trust in machine learning and AI tools was the most commonly cited potential benefit.

CONCLUSIONS: The majority of research appears to focus on the mechanics of developing explainable machine learning models, with little attention paid to the clinical end-user experience. While increased trust in machine learning tools is often cited as a potential outcome of well-implemented explainability, there is little discussion of how this trust can be effectively measured and operationalized. Ultimately, improved alignment between research, implementation, and medical education will benefit the advancement of XML for clinical decision support and the capacity of these tools to benefit healthcare.
Database: OpenAIRE