Investigating the Accuracy and Completeness of an Artificial Intelligence Large Language Model About Uveitis: An Evaluation of ChatGPT.

Author: Marshall RF; The Drexel University College of Medicine, Philadelphia, Pennsylvania, USA., Mallem K; The Drexel University College of Medicine, Philadelphia, Pennsylvania, USA., Xu H; University of California San Diego, San Diego, California, USA., Thorne J; The Wilmer Eye Institute, Division of Ocular Immunology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA., Burkholder B; The Wilmer Eye Institute, Division of Ocular Immunology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA., Chaon B; The Wilmer Eye Institute, Division of Ocular Immunology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA., Liberman P; The Wilmer Eye Institute, Division of Ocular Immunology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA., Berkenstock M; The Wilmer Eye Institute, Division of Ocular Immunology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA.
Language: English
Source: Ocular immunology and inflammation [Ocul Immunol Inflamm] 2024 Nov; Vol. 32 (9), pp. 2052-2055. Date of Electronic Publication: 2024 Feb 23.
DOI: 10.1080/09273948.2024.2317417
Abstract: Purpose: To assess the accuracy and completeness of ChatGPT-generated answers regarding uveitis description, prevention, treatment, and prognosis.
Methods: Thirty-two uveitis-related questions were generated by a uveitis specialist and entered into ChatGPT 3.5. Answers were compiled into a survey and reviewed by five uveitis specialists using standardized Likert scales of accuracy and completeness.
Results: In total, the median accuracy score for all the uveitis questions (n = 32) was 4.00 (between "more correct than incorrect" and "nearly all correct"), and the median completeness score was 2.00 ("adequate, addresses all aspects of the question and provides the minimum amount of information required to be considered complete"). The interrater variability assessment had a total kappa value of 0.0278 for accuracy and 0.0847 for completeness.
Conclusion: ChatGPT can provide relatively high-accuracy responses to various questions related to uveitis; however, the answers it provides are incomplete, with some inaccuracies. Its ability to provide medical information requires further validation and development before it can serve as a source of uveitis information for patients.
Database: MEDLINE
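
For context on the interrater variability reported in the Results, agreement among the five specialists could be quantified with a Fleiss-style multi-rater kappa. The abstract does not state which kappa variant or software the authors used, so the sketch below is purely illustrative: the `accuracy_ratings` array, the 1-5 Likert coding, and the use of statsmodels are assumptions, not the study's actual method or data.

```python
# Minimal sketch of a multi-rater kappa calculation, assuming a Fleiss-style
# kappa over Likert ratings. The ratings below are randomly generated for
# illustration only; they are not the study data.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical accuracy ratings: 32 questions (rows) x 5 specialists (columns),
# each cell a Likert score from 1 to 5 (assumed scale).
rng = np.random.default_rng(0)
accuracy_ratings = rng.integers(3, 6, size=(32, 5))

# Convert the subjects-by-raters label matrix into a subjects-by-categories
# count table, then compute Fleiss' kappa across the five raters.
table, _ = aggregate_raters(accuracy_ratings)
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa (accuracy): {kappa:.4f}")
```

A kappa near zero, as reported in the abstract, would indicate agreement little better than chance, which is consistent with the authors' caution about the variability of specialist ratings.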