Author: |
Shah-Mohammadi F; Department of Biomedical Informatics, School of Medicine, University of Utah, Salt Lake City, UT 84112, USA., Finkelstein J; Department of Biomedical Informatics, School of Medicine, University of Utah, Salt Lake City, UT 84112, USA. |
Language: |
English |
Source: |
Diagnostics (Basel, Switzerland) [Diagnostics (Basel)] 2024 Aug 15; Vol. 14 (16). Date of Electronic Publication: 2024 Aug 15. |
DOI: |
10.3390/diagnostics14161779 |
Abstract: |
In emergency department (ED) settings, rapid and precise diagnostic evaluations are critical to ensuring better patient outcomes and efficient healthcare delivery. This study assesses the accuracy of differential diagnosis lists generated by the third-generation ChatGPT (ChatGPT-3.5) and the fourth-generation ChatGPT (ChatGPT-4) based on electronic health record notes recorded within the first 24 h of ED admission. These models process unstructured text to formulate a ranked list of potential diagnoses. The accuracy of these models was benchmarked against actual discharge diagnoses to evaluate their utility as diagnostic aids. Results indicated that both GPT-3.5 and GPT-4 predicted diagnoses at the body system level with reasonable accuracy, with GPT-4 slightly outperforming its predecessor. However, their performance at the more granular category level was inconsistent, often showing decreased precision. Notably, GPT-4 demonstrated improved accuracy in several critical categories, underscoring its advanced capabilities in managing complex clinical scenarios. |
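The abstract describes the evaluation pipeline only in prose; no code accompanies this record. Below is a minimal sketch of the kind of step it describes, assuming the OpenAI chat completions API; the model name, prompt wording, and list length are illustrative placeholders, not the study's actual protocol.

# Minimal sketch: send an unstructured ED admission note to a chat model and
# request a ranked differential diagnosis list. Model name, prompt wording,
# and list length are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a clinical decision-support assistant. Given an emergency "
    "department admission note, return a ranked list of the 10 most likely "
    "diagnoses, most probable first, one per line."
)

def ranked_differential(note_text: str, model: str = "gpt-4") -> list[str]:
    """Return a ranked differential diagnosis list for one EHR note."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": note_text},
        ],
        temperature=0,  # deterministic output aids benchmarking
    )
    text = response.choices[0].message.content or ""
    # Split the numbered list into individual diagnosis strings.
    return [line.lstrip("0123456789. ").strip()
            for line in text.splitlines() if line.strip()]

Each ranked list would then be scored against the patient's actual discharge diagnosis, for example at the body-system or diagnostic-category level, as the abstract describes.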
Database: |
MEDLINE |
External link: |
|