ChatGPT's adherence to otolaryngology clinical practice guidelines.

Author: Tessler I (Department of Otolaryngology and Head and Neck Surgery, Sheba Medical Center, Ramat Gan, Israel; School of Medicine, Tel Aviv University, Tel Aviv, Israel; ARC Innovation Center, Sheba Medical Center, Ramat Gan, Israel; idit.tessler@gmail.com), Wolfovitz A (Department of Otolaryngology and Head and Neck Surgery, Sheba Medical Center, Ramat Gan, Israel; School of Medicine, Tel Aviv University, Tel Aviv, Israel), Alon EE (Department of Otolaryngology and Head and Neck Surgery, Sheba Medical Center, Ramat Gan, Israel; School of Medicine, Tel Aviv University, Tel Aviv, Israel), Gecel NA (School of Medicine, Tel Aviv University, Tel Aviv, Israel), Livneh N (Department of Otolaryngology and Head and Neck Surgery, Sheba Medical Center, Ramat Gan, Israel; School of Medicine, Tel Aviv University, Tel Aviv, Israel), Zimlichman E (School of Medicine, Tel Aviv University, Tel Aviv, Israel; ARC Innovation Center, Sheba Medical Center, Ramat Gan, Israel; The Sheba Talpiot Medical Leadership Program, Ramat Gan, Israel; Hospital Management, Sheba Medical Center, Ramat Gan, Israel), Klang E (The Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, USA)
Language: English
Source: European Archives of Oto-Rhino-Laryngology: official journal of the European Federation of Oto-Rhino-Laryngological Societies (EUFOS), affiliated with the German Society for Oto-Rhino-Laryngology - Head and Neck Surgery [Eur Arch Otorhinolaryngol] 2024 Jul; Vol. 281 (7), pp. 3829-3834. Date of Electronic Publication: 2024 Apr 22.
DOI: 10.1007/s00405-024-08634-9
Abstract: Objectives: Large language models, including ChatGPT, have the potential to transform the way we approach medical knowledge, yet accuracy in clinical topics is critical. Here we assessed ChatGPT's performance in adhering to the American Academy of Otolaryngology-Head and Neck Surgery guidelines.
Methods: We presented ChatGPT with 24 clinical otolaryngology questions based on the guidelines of the American Academy of Otolaryngology. This was done three times (N = 72) to test the model's consistency. Two otolaryngologists evaluated the responses for accuracy and relevance to the guidelines. Cohen's Kappa was used to measure evaluator agreement, and Cronbach's alpha assessed the consistency of ChatGPT's responses.
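The consistency measure described above can be illustrated with a short sketch. This is a minimal, hypothetical implementation of Cronbach's alpha, not the authors' actual analysis code; the example score matrix is invented for illustration (rows are questions, columns are the three repeated runs):

```python
from statistics import variance


def cronbach_alpha(scores):
    """Cronbach's alpha for scores[subject][item].

    alpha = k/(k-1) * (1 - sum of per-item variances / variance of row totals)
    where k is the number of items (here, repeated runs).
    """
    k = len(scores[0])
    items = list(zip(*scores))  # transpose: one tuple per item/run
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance(sum(row) for row in scores)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)


# Hypothetical ratings: 4 questions, each scored across 3 repeated runs.
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [1, 2, 1]]
print(round(cronbach_alpha(scores), 2))  # prints 0.96 (runs largely agree)
```

A value near 1 means the repeated runs rank the questions alike; the 0.87 reported below reflects this kind of run-to-run agreement across the 24 questions.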
Results: The study revealed mixed results: 59.7% (43/72) of ChatGPT's responses were highly accurate, while only 2.8% (2/72) directly contradicted the guidelines. The model showed 100% accuracy in Head and Neck but lower accuracy in Rhinology and Otology/Neurotology (66%), Laryngology (50%), and Pediatrics (8%). The model's responses were consistent for 17/24 questions (70.8%), with a Cronbach's alpha of 0.87, indicating reasonable consistency across tests.
Conclusions: Using a guideline-based set of structured questions, ChatGPT demonstrates consistency but variable accuracy in otolaryngology. Its lower performance in some areas, especially Pediatrics, suggests that further rigorous evaluation is needed before considering real-world clinical use.
(© 2024. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.)
Database: MEDLINE