ChatGPT in Answering Queries Related to Lifestyle-Related Diseases and Disorders.
Authors: Mondal H (Physiology, All India Institute of Medical Sciences, Deoghar, IND); Dash I (Biochemistry, Saheed Laxman Nayak Medical College and Hospital, Koraput, IND); Mondal S (Physiology, Raiganj Government Medical College and Hospital, Raiganj, IND); Behera JK (Physiology, Nagaland Institute of Medical Science and Research, Kohima, IND)
Language: English
Source: Cureus [Cureus] 2023 Nov 05; Vol. 15 (11), pp. e48296. Date of Electronic Publication: 2023 Nov 05 (Print Publication: 2023).
DOI: 10.7759/cureus.48296
Abstract: Background Lifestyle-related diseases and disorders have become a significant global health burden. However, the majority of the population ignores these conditions or does not consult doctors about them. An artificial intelligence (AI)-based large language model (LLM) such as ChatGPT (GPT-3.5) is capable of generating customized answers to a user's queries. Hence, it can act as a virtual telehealth agent. Its capability to answer queries about lifestyle-related diseases or disorders has not been explored. Objective This study aimed to evaluate the effectiveness of ChatGPT, an LLM, in providing answers to queries related to lifestyle-related diseases or disorders. Methods A set of 20 cases of lifestyle-related diseases or disorders, covering a wide range of topics such as obesity, diabetes, cardiovascular health, and mental health, was prepared, each with four questions. Each case and its questions were presented to ChatGPT, which was asked to answer them. Two physicians rated the content for accuracy on a three-point Likert-like scale: accurate (2), partially accurate (1), or inaccurate (0). Further, to test its applicability as guidance for patients, the content was rated as adequate (2), inadequate (1), or misguiding (0). The readability of the text was analyzed with the Flesch-Kincaid Ease Score (FKES). Results Across the 20 cases, the average accuracy score was 1.83±0.37 and the average guidance score was 1.9±0.21. Both scores were higher than the hypothetical median of 1.5 (p=0.004 and p<0.0001, respectively). ChatGPT answered the questions in a natural tone in 11 cases and in a positive tone in nine. With a mean FKES of 27.8±5.74, the text was understandable by college graduates. Conclusion The analysis of content accuracy revealed that ChatGPT provided reasonably accurate information in the majority of cases, successfully addressing queries related to lifestyle-related diseases or disorders. Hence, patients can obtain initial guidance when they have little time to consult a doctor or are waiting for an appointment. Competing Interests: The authors have declared that no competing interests exist. (Copyright © 2023, Mondal et al.)
Database: MEDLINE
External link:
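
The abstract's two quantitative steps can be illustrated with a short Python sketch: scoring readability with the standard Flesch Reading Ease formula (the basis of the FKES the study reports) and testing the ratings against the hypothetical median of 1.5, most plausibly with a one-sample Wilcoxon signed-rank test, which the abstract does not name. Everything below is an assumption for illustration: the syllable counter is a crude heuristic and the `ratings` array is made-up, not the study's data.

```python
# Minimal sketch (not the authors' code) of the readability scoring and the
# test against a hypothetical median described in the abstract.
import re
import numpy as np
from scipy.stats import wilcoxon

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels. Real readability
    # tools use pronunciation dictionaries, so treat this as approximate.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Standard Flesch Reading Ease formula; scores in the 0-30 band are
    # conventionally read as "very difficult, college-graduate level".
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

# Illustrative accuracy ratings for 20 cases (0 = inaccurate, 1 = partially
# accurate, 2 = accurate); NOT the study's actual data.
ratings = np.array([2] * 17 + [1] * 3, dtype=float)
print(f"mean ± SD: {ratings.mean():.2f} ± {ratings.std(ddof=1):.2f}")

# One-sample Wilcoxon signed-rank test against the hypothetical median 1.5,
# implemented by testing the differences from 1.5 against zero.
stat, p = wilcoxon(ratings - 1.5, alternative="greater")
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```

Under this reading, the reported mean FKES of 27.8 falls in the 0-30 band of the Flesch scale, which is consistent with the abstract's conclusion that the text suits college graduates.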