Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care.
Author: | Hancı V; Clinic of Anesthesiology and Critical Care, Sincan Education and Research Hospital, Ankara, Turkey., Ergün B; Clinic of Internal Medicine and Critical Care, Dr. Ismail Fehmi Cumalioğlu City Hospital, Tekirdağ, Turkey., Gül Ş; Clinic of Neurosurgery, Ankara Ataturk Sanatory Education and Research Hospital, Ankara, Turkey., Uzun Ö; Clinic of Internal Medicine and Nephrology, Yalova City Hospital, Yalova, Turkey., Erdemir İ; Department of Anesthesiology and Critical Care, Faculty of Medicine, Dokuz Eylül University, Izmir, Turkey., Hancı FB; Artificial Intelligence Engineering Department, Faculty of Engineering, Ostim Technical University, Ankara, Turkey. |
---|---|
Language: | English |
Source: | Medicine [Medicine (Baltimore)] 2024 Aug 16; Vol. 103 (33), pp. e39305. |
DOI: | 10.1097/MD.0000000000039305 |
Abstract: | There is no study that comprehensively evaluates the readability and quality of "palliative care" information provided by the artificial intelligence (AI) chatbots ChatGPT®, Bard®, Gemini®, Copilot®, and Perplexity®. Our study is an observational, cross-sectional original research study. In our study, the AI chatbots ChatGPT®, Bard®, Gemini®, Copilot®, and Perplexity® were each asked to answer the 100 questions most frequently asked by patients about palliative care. The responses of each of the 5 AI chatbots were analyzed separately. This study did not involve any human participants. The results revealed significant differences between the readability assessments of the responses of all 5 AI chatbots (P < .05). When the different readability indexes were evaluated holistically, the readability of the AI chatbot responses ranged, from easiest to most difficult, as Bard®, Copilot®, Perplexity®, ChatGPT®, Gemini® (P < .05). The median readability indexes of the responses of each of the 5 AI chatbots (Bard®, Copilot®, Perplexity®, ChatGPT®, Gemini®) were compared with the "recommended" 6th grade reading level. When the answers of all 5 AI chatbots were compared with the 6th grade reading level, statistically significant differences were observed in all formulas (P < .001); the answers of all 5 AI chatbots were at an educational level well above the 6th grade. The modified DISCERN and Journal of the American Medical Association scores were found to be highest for Perplexity® (P < .001). Gemini® responses were found to have the highest Global Quality Scale score (P < .001). It is emphasized that patient education materials should have a readability at the 6th grade level.
Of the 5 AI chatbots whose answers about palliative care were evaluated (Bard®, Copilot®, Perplexity®, ChatGPT®, Gemini®), the current answers were found to be well above the recommended levels in terms of the readability of their text content. Text content quality assessment scores were also low. Both the quality and the readability of the texts should be brought within the recommended limits. Competing Interests: The authors have no conflicts of interest to disclose. (Copyright © 2024 the Author(s). Published by Wolters Kluwer Health, Inc.) |
Database: | MEDLINE |
External link: |
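The abstract does not name the specific readability formulas the study applied, but grade-level comparisons of this kind are typically made with indexes such as the Flesch-Kincaid Grade Level, which estimates a U.S. school grade from sentence length and syllables per word. A minimal sketch of that standard formula (using a crude vowel-group syllable heuristic, which is an assumption of this illustration, not the study's method):

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def count_syllables(word: str) -> int:
        # Crude heuristic: one syllable per contiguous vowel group, minimum 1.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Short, simple sentences score low (roughly primary-school level);
# long, polysyllabic clinical prose scores far above the 6th grade
# threshold the study uses as its benchmark.
easy = flesch_kincaid_grade("The cat sat on the mat.")
hard = flesch_kincaid_grade(
    "Comprehensive interdisciplinary palliative consultations "
    "necessitate considerable institutional coordination."
)
```

A response scoring above 6.0 on such an index would, by the study's criterion, be considered too difficult for general patient education material.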