Is ChatGPT a Reliable Source of Patient Information on Asthma?

Authors: Alabdulmohsen DM; Internal Medicine, College of Medicine, King Faisal University, Hofuf, SAU., Almahmudi MA; Internal Medicine, College of Medicine, King Faisal University, Hofuf, SAU., Alhashim JN; Hematology and Oncology, King Saud Medical City, Riyadh, SAU., Almahdi MH; Internal Medicine, King Faisal General Hospital, Hofuf, SAU., Alkishy EF; Endocrinology, King Saud Medical City, Riyadh, SAU., Almossabeh MJ; Internal Medicine, Abqaiq General Hospital, Abqaiq, SAU., Alkhalifah SA; Internal Medicine, Almoosa Specialist Hospital, Al Mubarraz, SAU.
Language: English
Source: Cureus [Cureus] 2024 Jul 08; Vol. 16 (7), pp. e64114. Date of Electronic Publication: 2024 Jul 08 (Print Publication: 2024).
DOI: 10.7759/cureus.64114
Abstract: Introduction: ChatGPT (OpenAI, San Francisco, CA, USA) is a novel artificial intelligence (AI) application used by millions of people, and that number grows by the day. Because it has the potential to serve as a source of patient information, this study aimed to evaluate the ability of ChatGPT to answer frequently asked questions (FAQs) about asthma with consistent reliability, acceptability, and easy readability.
Methods: We collected 30 FAQs about asthma from the Global Initiative for Asthma website. ChatGPT was asked each question twice, by two different users, to assess for consistency. The responses were evaluated by five board-certified internal medicine physicians for reliability and acceptability. The consistency of responses was determined by the differences in evaluation between the two answers to the same question. The readability of all responses was measured using the Flesch Reading Ease Scale (FRES), the Flesch-Kincaid Grade Level (FKGL), and the Simple Measure of Gobbledygook (SMOG).
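The three readability measures used in the Methods are standard formulas over sentence, word, and syllable counts. A minimal sketch of how such scores are computed is below; the formulas (FRES, FKGL, SMOG) are the published ones, but the syllable counter is a rough vowel-group heuristic of our own, not the dictionary-based method that dedicated readability tools use, so scores will differ slightly from those reported in the study.

```python
import math
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, discounting a trailing silent "e".
    # Real readability tools use pronunciation dictionaries instead.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> dict:
    """Compute FRES, FKGL, and SMOG for a block of text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    n_s, n_w = len(sentences), len(words)
    # Flesch Reading Ease: higher = easier (60-70 is "plain English").
    fres = 206.835 - 1.015 * (n_w / n_s) - 84.6 * (syllables / n_w)
    # Flesch-Kincaid Grade Level: approximate US school grade.
    fkgl = 0.39 * (n_w / n_s) + 11.8 * (syllables / n_w) - 15.59
    # SMOG: grade estimate from polysyllabic words per 30 sentences.
    smog = 1.0430 * math.sqrt(polysyllables * (30 / n_s)) + 3.1291
    return {"FRES": round(fres, 2), "FKGL": round(fkgl, 2), "SMOG": round(smog, 2)}
```

On this scale, the study's mean FRES of 33.50 falls in the "difficult" band, consistent with the conclusion that responses exceeded recommended patient-education reading levels.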
Results: Sixty responses were collected for evaluation. Fifty-six responses (93.33%) showed good reliability, and the average rating was 3.65 out of 4 points. Forty-seven responses (78.3%) were judged acceptable by the evaluators to stand alone as the only answer given to an asthmatic patient. Only two (6.67%) of the 30 questions received inconsistent answers. The average readability of all responses was 33.50±14.37 on the FRES, 12.79±2.89 on the FKGL, and 13.47±2.38 on the SMOG.
Conclusion: Compared to online websites, we found that ChatGPT can be a reliable and acceptable source of information for asthma patients in terms of information quality. However, all responses were difficult to read, and none met the recommended readability levels. The readability of this AI application therefore requires improvement to make it more suitable for patients.
Competing Interests: Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
(Copyright © 2024, Alabdulmohsen et al.)
Database: MEDLINE