Using Natural Language Processing to Visualize Narrative Feedback in a Medical Student Performance Dashboard.

Authors: Maimone C; Dolan BM; Green MM; Sanguino SM; O'Brien CL
Affiliations: C. Maimone is associate director of research data services, Northwestern University, Evanston, Illinois. B.M. Dolan is associate professor of medicine and medical education and assistant dean for assessment, Northwestern University Feinberg School of Medicine, Chicago, Illinois. M.M. Green is Raymond H. Curry, MD, Professor of Medical Education, professor of medicine, and vice dean for education, Northwestern University Feinberg School of Medicine, Chicago, Illinois. S.M. Sanguino is associate professor of pediatrics and senior associate dean of medical education, Northwestern University Feinberg School of Medicine, Chicago, Illinois. C.L. O'Brien is assistant professor of medical education and assistant dean of program evaluation and accreditation, Northwestern University Feinberg School of Medicine, Chicago, Illinois.
Language: English
Source: Academic Medicine: Journal of the Association of American Medical Colleges [Acad Med]. 2024 Jul 03. Date of Electronic Publication: 2024 Jul 03.
DOI: 10.1097/ACM.0000000000005800
Abstract: Problem: Clinical competency committees rely on narrative feedback for important insight into learner performance, but reviewing comments can be time-consuming. Techniques such as natural language processing (NLP) could create efficiencies in narrative feedback review. In this study, the authors explored whether using NLP to create a visual dashboard of narrative feedback given to preclerkship medical students would improve competency review efficiency.
Approach: Preclerkship competency review data collected at the Northwestern University Feinberg School of Medicine from 2014 to 2021 were used to identify features of narrative data associated with review outcome (ready or not ready) and to draft visual summary reports of the findings. In December 2019, a user needs analysis was conducted with experienced reviewers to better understand their work processes. Dashboards were designed based on this input to help reviewers efficiently navigate large amounts of narrative data. The dashboards displayed the model's prediction of the review outcome along with visualizations of how narratives in a student's portfolio compared with previous students' narratives. Excerpts of the most relevant comments were also provided. Six faculty reviewers who comprised the competency committee in spring 2023 were surveyed on the dashboard's utility.
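The abstract does not specify the model the authors used, but the approach it describes (learning which narrative features are associated with the ready/not-ready outcome, predicting the outcome, and surfacing the most relevant terms) can be illustrated with a minimal, hypothetical sketch. The sketch below uses a smoothed bag-of-words log-odds classifier on invented toy comments; all function names and data are illustrative assumptions, not the study's actual method.

```python
from collections import Counter
import math

def train_log_odds(narratives, labels):
    """Count words in each outcome class and return smoothed log-odds
    weights: positive values lean 'ready', negative lean 'not ready'."""
    ready, not_ready = Counter(), Counter()
    for text, label in zip(narratives, labels):
        (ready if label == "ready" else not_ready).update(text.lower().split())
    vocab = set(ready) | set(not_ready)
    n_ready = sum(ready.values()) + len(vocab)
    n_not = sum(not_ready.values()) + len(vocab)
    return {w: math.log((ready[w] + 1) / n_ready)
               - math.log((not_ready[w] + 1) / n_not) for w in vocab}

def predict(weights, text):
    """Sum word weights over a student's narratives; the sign gives the call."""
    score = sum(weights.get(w, 0.0) for w in text.lower().split())
    return "ready" if score >= 0 else "not ready"

def top_terms(weights, k=3):
    """Terms with the largest influence, usable to pick comment excerpts."""
    return sorted(weights, key=lambda w: abs(weights[w]), reverse=True)[:k]

# Toy data standing in for historical narrative comments (invented).
narratives = [
    "excellent clinical reasoning and strong exam skills",
    "outstanding presentations excellent rapport with patients",
    "struggled with the differential and poor organization",
    "incomplete notes and struggled to improve",
]
labels = ["ready", "ready", "not ready", "not ready"]
weights = train_log_odds(narratives, labels)
```

In a dashboard setting, the prediction would be shown alongside the student's portfolio, and the highest-weight terms would anchor the excerpted comments; a production system would more plausibly use regularized regression or a modern language model rather than raw word counts.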
Outcomes: Reviewers found the predictive component of the dashboard most useful. Only 1 of 6 reviewers (17%) agreed that the dashboard improved process efficiency. However, 3 (50%) thought the visuals made them more confident in decisions about competence, and 3 (50%) thought they would use the visual summaries for future reviews. The outcomes highlight limitations of visualizing and summarizing narrative feedback in a comprehensive assessment system.
Next Steps: Future work will explore how to optimize the dashboards to meet reviewer needs. Ongoing advancements in large language models may facilitate these efforts. Opportunities to collaborate with other institutions to apply the model to an external context will also be sought.
(Copyright © 2024 the Association of American Medical Colleges.)
Database: MEDLINE