Testing and Evaluation of Health Care Applications of Large Language Models: A Systematic Review.

Authors: Bedi S; Department of Biomedical Data Science, Stanford School of Medicine, Stanford, California., Liu Y; Clinical Excellence Research Center, Stanford University, Stanford, California., Orr-Ewing L; Clinical Excellence Research Center, Stanford University, Stanford, California., Dash D; Clinical Excellence Research Center, Stanford University, Stanford, California.; Center for Biomedical Informatics Research, Stanford University, Stanford, California., Koyejo S; Department of Computer Science, Stanford University, Stanford, California., Callahan A; Center for Biomedical Informatics Research, Stanford University, Stanford, California., Fries JA; Center for Biomedical Informatics Research, Stanford University, Stanford, California., Wornow M; Center for Biomedical Informatics Research, Stanford University, Stanford, California., Swaminathan A; Center for Biomedical Informatics Research, Stanford University, Stanford, California., Lehmann LS; Department of Medicine, Harvard Medical School, Boston, Massachusetts., Hong HJ; Department of Anesthesiology, Stanford University, Stanford, California., Kashyap M; Stanford University School of Medicine, Stanford, California., Chaurasia AR; Center for Biomedical Informatics Research, Stanford University, Stanford, California., Shah NR; Clinical Excellence Research Center, Stanford University, Stanford, California., Singh K; Digital Health Innovation, University of California San Diego Health, San Diego., Tazbaz T; Digital Health Center of Excellence, US Food and Drug Administration, Washington, DC., Milstein A; Clinical Excellence Research Center, Stanford University, Stanford, California., Pfeffer MA; Department of Medicine, Stanford University School of Medicine, Stanford, California., Shah NH; Clinical Excellence Research Center, Stanford University, Stanford, California.; Center for Biomedical Informatics Research, Stanford University, Stanford, California.
Language: English
Source: JAMA, 2024 Oct 15. Date of Electronic Publication: 2024 Oct 15.
DOI: 10.1001/jama.2024.21700
Abstract: Importance: Large language models (LLMs) can assist in various health care activities, but current evaluation approaches may not adequately identify the most useful application areas.
Objective: To summarize existing evaluations of LLMs in health care in terms of 5 components: (1) evaluation data type, (2) health care task, (3) natural language processing (NLP) and natural language understanding (NLU) tasks, (4) dimension of evaluation, and (5) medical specialty.
Data Sources: A systematic search of PubMed and Web of Science was performed for studies published between January 1, 2022, and February 19, 2024.
Study Selection: Studies evaluating 1 or more LLMs in health care.
Data Extraction and Synthesis: Three independent reviewers categorized studies via keyword searches based on the data used, the health care tasks, the NLP and NLU tasks, the dimensions of evaluation, and the medical specialty.
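A minimal sketch of how such keyword-based categorization of studies along the review's axes might look in code; the axis names, keyword lists, and the categorize() helper below are illustrative assumptions, not the reviewers' actual protocol.

# Hypothetical sketch: assign a study abstract to categories along the
# review axes by simple keyword matching. Keyword lists are assumptions.
AXES = {
    "health care task": {
        "medical knowledge": ["licensing examination", "board exam", "USMLE"],
        "making diagnoses": ["diagnosis", "differential"],
        "billing codes": ["ICD-10", "CPT", "billing code"],
    },
    "NLP/NLU task": {
        "question answering": ["question answering", "multiple choice"],
        "summarization": ["summarization", "summary"],
        "conversational dialogue": ["dialogue", "chatbot conversation"],
    },
    "dimension of evaluation": {
        "accuracy": ["accuracy", "correct"],
        "fairness/bias/toxicity": ["fairness", "bias", "toxicity"],
        "calibration/uncertainty": ["calibration", "uncertainty"],
    },
}

def categorize(abstract: str) -> dict[str, list[str]]:
    """Return, for each axis, the category labels whose keywords appear in the abstract."""
    text = abstract.lower()
    return {
        axis: [label for label, keywords in categories.items()
               if any(kw.lower() in text for kw in keywords)]
        for axis, categories in AXES.items()
    }

if __name__ == "__main__":
    example = ("We evaluate an LLM's accuracy on USMLE-style multiple choice "
               "questions and assess bias across patient demographics.")
    print(categorize(example))

In the published review, such keyword-based tagging would still be checked by the independent reviewers; the sketch only illustrates the mechanical matching step.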
Results: Of 519 studies reviewed, published between January 1, 2022, and February 19, 2024, only 5% used real patient care data for LLM evaluation. The most common health care tasks were assessing medical knowledge such as answering medical licensing examination questions (44.5%) and making diagnoses (19.5%). Administrative tasks such as assigning billing codes (0.2%) and writing prescriptions (0.2%) were less studied. For NLP and NLU tasks, most studies focused on question answering (84.2%), while tasks such as summarization (8.9%) and conversational dialogue (3.3%) were infrequent. Almost all studies (95.4%) used accuracy as the primary dimension of evaluation; fairness, bias, and toxicity (15.8%), deployment considerations (4.6%), and calibration and uncertainty (1.2%) were infrequently measured. Finally, in terms of medical specialty area, most studies were in generic health care applications (25.6%), internal medicine (16.4%), surgery (11.4%), and ophthalmology (6.9%), with nuclear medicine (0.6%), physical medicine (0.4%), and medical genetics (0.2%) being the least represented.
Conclusions and Relevance: Existing evaluations of LLMs mostly focus on accuracy of question answering for medical examinations, without consideration of real patient care data. Dimensions such as fairness, bias, and toxicity, as well as deployment considerations, received limited attention. Future evaluations should adopt standardized applications and metrics, use clinical data, and broaden their focus to include a wider range of tasks and specialties.
Database: MEDLINE