Evaluating the Performance of ChatGPT in Urology: A Comparative Study of Knowledge Interpretation and Patient Guidance.

Author: Şahin B, Emre Genç Y, Doğan K, Emre Şener T, Şekerci ÇA, Tanıdır Y, Yücel S, Tarcan T, Çam HK; all authors: Department of Urology, Marmara University School of Medicine, Istanbul, Turkey.
Language: English
Source: Journal of Endourology [J Endourol] 2024 Aug; Vol. 38 (8), pp. 799-808. Date of Electronic Publication: 2024 May 30.
DOI: 10.1089/end.2023.0413
Abstract: Background/Aim: To evaluate the performance of Chat Generative Pre-trained Transformer (ChatGPT), a large language model trained by OpenAI, in the field of urology.
Materials and Methods: The study comprised three main steps to evaluate the effectiveness of ChatGPT in the urologic field. In the first step, 35 questions were prepared by our institution's experts, each with at least 10 years of experience in their field; the responses of ChatGPT versions 3.5 and 4 were qualitatively compared with the responses of urology residents to the same questions. The second step assessed the reliability of both versions in answering current debate topics in urology. The third step assessed the reliability of both versions in providing medical recommendations and guidance for questions commonly asked by patients in the outpatient and inpatient settings.
Results: In the first step, version 4 answered 25 of the 35 questions correctly, whereas version 3.5 answered only 19 (71.4% vs 54.3%). Final-year residents in our clinic also provided a mean of 25 correct answers, and fourth-year residents a mean of 19.3. In the second step, both versions gave variable and inappropriate responses to the debated topics in urology. In the last step, both versions had a similar success rate in providing recommendations and guidance to patients, based on expert ratings.
Conclusion: The difference between the two versions on the 35 questions in the first step is thought to reflect improvements in ChatGPT's literature and data synthesis abilities. It may be reasonable to use ChatGPT to give non-health care providers quick and safe answers to their questions, but it should not be used as a diagnostic tool or to choose among different treatment modalities.
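For illustration only, the first-step accuracy figures quoted in the Results can be recomputed from the counts stated in the abstract (25/35 and 19/35 correct answers, and resident means of 25 and 19.3). The short Python sketch below does this arithmetic; the variable names and helper function are assumptions made for the sketch and are not part of the study.

# Illustrative recomputation of the first-step accuracy figures reported above.
# The counts come from the abstract; the scaffold around them is assumed.
TOTAL_QUESTIONS = 35

scores = {
    "ChatGPT-4": 25,                      # correct answers out of 35
    "ChatGPT-3.5": 19,                    # correct answers out of 35
    "Final-year residents (mean)": 25.0,  # mean correct answers
    "4th-year residents (mean)": 19.3,    # mean correct answers
}

def accuracy(correct: float, total: int = TOTAL_QUESTIONS) -> float:
    """Return the percentage of correct answers."""
    return 100.0 * correct / total

for name, correct in scores.items():
    print(f"{name}: {correct}/{TOTAL_QUESTIONS} = {accuracy(correct):.1f}%")
# Prints 71.4% for version 4 and 54.3% for version 3.5, matching the Results.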
Database: MEDLINE