Diagnostic accuracy of vision-language models on Japanese diagnostic radiology, nuclear medicine, and interventional radiology specialty board examinations.

Author: Oura, Tatsushi, Tatekawa, Hiroyuki, Horiuchi, Daisuke, Matsushita, Shu, Takita, Hirotaka, Atsukawa, Natsuko, Mitsuyama, Yasuhito, Yoshida, Atsushi, Murai, Kazuki, Tanaka, Rikako, Shimono, Taro, Yamamoto, Akira, Miki, Yukio, Ueda, Daiju
Source: Japanese Journal of Radiology; Dec2024, Vol. 42 Issue 12, p1392-1398, 7p
Abstract: Purpose: The performance of vision-language models (VLMs) with image interpretation capabilities, such as GPT-4 omni (GPT-4o), GPT-4 vision (GPT-4V), and Claude-3, has not been compared and remains unexplored in specialized radiological fields, including nuclear medicine and interventional radiology. This study aimed to evaluate and compare the diagnostic accuracy of various VLMs, including GPT-4 + GPT-4V, GPT-4o, Claude-3 Sonnet, and Claude-3 Opus, using the Japanese diagnostic radiology, nuclear medicine, and interventional radiology (JDR, JNM, and JIR, respectively) board certification tests. Materials and methods: In total, 383 questions from the JDR test (358 images), 300 from the JNM test (92 images), and 322 from the JIR test (96 images) from 2019 to 2023 were consecutively collected. The accuracy rates of GPT-4 + GPT-4V, GPT-4o, Claude-3 Sonnet, and Claude-3 Opus were calculated for all questions and for questions with images. The accuracy rates of the VLMs were compared using McNemar's test. Results: GPT-4o demonstrated the highest accuracy rates across all evaluations with the JDR (all questions, 49%; questions with images, 48%), JNM (all questions, 64%; questions with images, 59%), and JIR tests (all questions, 43%; questions with images, 34%), followed by Claude-3 Opus with the JDR (all questions, 40%; questions with images, 38%), JNM (all questions, 42%; questions with images, 43%), and JIR tests (all questions, 40%; questions with images, 30%). For all questions, McNemar's test showed that GPT-4o significantly outperformed the other VLMs (all P < 0.007), except for Claude-3 Opus in the JIR test. For questions with images, GPT-4o outperformed the other VLMs in the JDR and JNM tests (all P < 0.001), except Claude-3 Opus in the JNM test. Conclusion: GPT-4o had the highest success rates for questions with images and for all questions from the JDR, JNM, and JIR board certification tests.
Database: Complementary Index