Battle of the bots: a comparative analysis of ChatGPT and Bing AI for kidney stone-related questions.
Author: | McMahon AK; Department of Urology, University of Kansas Medical Center, Kansas City, KS, USA., Terry RS; Department of Urology, University of Florida College of Medicine, Gainesville, FL, USA., Ito WE; Department of Urology, University of Kansas Medical Center, Kansas City, KS, USA., Molina WR; Department of Urology, University of Kansas Medical Center, Kansas City, KS, USA., Whiles BB; Department of Urology, University of Kansas Medical Center, Kansas City, KS, USA. bwhiles@kumc.edu. |
---|---|
Language: | English |
Source: | World journal of urology [World J Urol] 2024 Oct 29; Vol. 42 (1), pp. 600. Date of Electronic Publication: 2024 Oct 29. |
DOI: | 10.1007/s00345-024-05326-1 |
Abstract: | Objectives: To evaluate and compare the performance of ChatGPT™ (OpenAI®) and Bing AI™ (Microsoft®) in responding to kidney stone treatment-related questions in accordance with the American Urological Association (AUA) guidelines, and to assess factors such as appropriateness, emphasis on consulting healthcare providers, references, and adherence to guidelines by each chatbot. Methods: We developed 20 kidney stone evaluation and treatment-related questions based on the AUA Surgical Management of Stones guideline. Questions were posed to the ChatGPT and Bing AI chatbots. We compared their responses using the brief DISCERN tool as well as response appropriateness. Results: ChatGPT significantly outperformed Bing AI on questions 1-3, which evaluate the clarity, achievement, and relevance of responses (12.77 ± 1.71 vs. 10.17 ± 3.27; p < 0.01). In contrast, Bing AI always incorporated references, whereas ChatGPT never did. Consequently, the results for questions 4-6, which evaluated the quality of sources, consistently favored Bing AI over ChatGPT (10.8 vs. 4.28; p < 0.01). Notably, neither chatbot offered guidance against guidelines for pre-operative testing. However, recommendations against guidelines were notable for specific scenarios: 30.5% for the treatment of adults with ureteral stones, 52.5% for adults with renal stones, and 20.5% for all patient treatment. Conclusions: ChatGPT significantly outperformed Bing AI in providing responses with a clear aim, achieving that aim, and giving relevant, appropriate responses based on AUA surgical stone management guidelines. However, Bing AI provides references, allowing assessment of information quality. Additional studies are needed to further evaluate these chatbots and their potential use by clinicians and patients for urologic healthcare-related questions. (© 2024. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.) |
Database: | MEDLINE |
External link: |