Assessing the medical reasoning skills of GPT-4 in complex ophthalmology cases.

Author: Milad, Daniel; Antaki, Fares; Milad, Jason; Farah, Andrew; Khairy, Thomas; Mikhail, David; Giguère, Charles-Édouard; Touma, Samir; Bernstein, Allison; Szigiato, Andrei-Alexandru; Nayman, Taylor; Mullie, Guillaume A.; Duval, Renaud
Source: British Journal of Ophthalmology; Oct2024, Vol. 108 Issue 10, p1398-1405, 10p
Abstract:
Background/aims: This study assesses the proficiency of Generative Pre-trained Transformer (GPT)-4 in answering questions about complex clinical ophthalmology cases.
Methods: We tested GPT-4 on 422 Journal of the American Medical Association Ophthalmology Clinical Challenges, prompting the model to determine the diagnosis (open-ended question) and to identify the next step (multiple-choice question). We generated responses using two zero-shot prompting strategies, including zero-shot plan-and-solve+ (PS+), to improve the model's reasoning. We compared the best-performing model to human graders in a benchmarking effort.
Results: Using PS+ prompting, GPT-4 achieved mean accuracies of 48.0% (95% CI 43.1% to 52.9%) for diagnosis and 63.0% (95% CI 58.2% to 67.6%) for the next step. Next-step accuracy did not differ significantly by subspecialty (p=0.44). However, diagnostic accuracy in pathology and tumours was significantly higher than in uveitis (p=0.027). When the diagnosis was accurate, 75.2% (95% CI 68.6% to 80.9%) of the next steps were correct. Conversely, when the diagnosis was incorrect, 50.2% (95% CI 43.8% to 56.6%) of the next steps were accurate. The next step was three times more likely to be accurate when the initial diagnosis was correct (p<0.001). No significant differences were observed in diagnostic accuracy and decision-making between board-certified ophthalmologists and GPT-4. Among trainees, senior residents outperformed GPT-4 in diagnostic accuracy (p=0.001 and 0.049) and in next-step accuracy (p=0.002 and 0.020).
Conclusion: Improved prompting enhances GPT-4's performance in complex clinical situations, although it does not surpass ophthalmology trainees in our context. Specialised large language models hold promise for future assistance in medical decision-making and diagnosis. [ABSTRACT FROM AUTHOR]
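The methods mention zero-shot plan-and-solve+ (PS+) prompting. The sketch below illustrates what such a prompt might look like when sent to a GPT-4-class chat endpoint; it is a minimal sketch under stated assumptions, not a reproduction of the study's pipeline. The system message, the PS+ trigger wording, the model name, and the helper ask_diagnosis are illustrative assumptions introduced here.

```python
# Minimal sketch of zero-shot plan-and-solve (PS+)-style prompting, assuming the
# OpenAI Python SDK (>=1.0). Prompt wording, model name, and case text are
# illustrative placeholders, not the study's actual prompts or data.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# PS+-style trigger: ask the model to devise a plan, extract relevant findings,
# then carry the plan out step by step before giving a final answer.
PS_PLUS_INSTRUCTION = (
    "Let's first understand the case and devise a plan to reach the diagnosis. "
    "Then, let's carry out the plan, extract the relevant clinical findings, "
    "and reason step by step before giving a final answer."
)

def ask_diagnosis(case_text: str) -> str:
    """Open-ended diagnosis question framed with a zero-shot PS+ prompt."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # near-deterministic output, convenient for evaluation
        messages=[
            {"role": "system", "content": "You are an ophthalmology clinical assistant."},
            {
                "role": "user",
                "content": f"{case_text}\n\nWhat is the most likely diagnosis?\n\n{PS_PLUS_INSTRUCTION}",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_diagnosis("A 54-year-old presents with ..."))  # placeholder case text
```

The multiple-choice next-step question could be handled the same way by swapping the user question for the answer options; the comparison of prompting strategies in the study amounts to varying the trailing instruction while keeping the case text fixed.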
Database: Complementary Index