The Capability of ChatGPT in Predicting and Explaining Common Drug-Drug Interactions.
Author: Juhi A; Physiology, All India Institute of Medical Sciences, Deoghar, Deoghar, IND. Pipil N; Pharmacology, All India Institute of Medical Sciences, Bilaspur, Bilaspur, IND. Santra S; Pharmacology, College of Medicine and JNM Hospital, Kalyani, IND. Mondal S; Physiology, Raiganj Government Medical College and Hospital, Raiganj, IND. Behera JK; Physiology, Dharanidhar Medical College, Keonjhar, Keonjhar, IND. Mondal H; Physiology, All India Institute of Medical Sciences, Deoghar, Deoghar, IND.
Language: English
Source: Cureus [Cureus] 2023 Mar 17; Vol. 15 (3), pp. e36272. Date of Electronic Publication: 2023 Mar 17 (Print Publication: 2023).
DOI: 10.7759/cureus.36272
Abstract:
Background: Drug-drug interactions (DDIs) can have serious consequences for patient health and well-being. Patients who take multiple medications may be at increased risk of adverse events or drug toxicity if they are unaware of potential interactions between their medications. Patients often self-prescribe medications without being aware of DDIs.
Objective: The objective is to investigate the effectiveness of ChatGPT, a large language model, in predicting and explaining common DDIs.
Methods: A list of 40 DDI pairs was prepared from previously published literature. This list was used to converse with ChatGPT through a two-stage question. The first question, "Can I take X and Y together?", was asked with two drug names substituted for X and Y. After the output was stored, the second question, "Why should I not take X and Y together?", was asked, and its output was stored for further analysis. The responses were checked by two pharmacologists, and the consensus output was categorized as "correct" or "incorrect." The "correct" responses were further classified as "conclusive" or "inconclusive." The text was assessed for reading ease scores and the education grade level required to understand it. Data were analyzed with descriptive and inferential statistics.
Results: Among the 40 DDI pairs, one answer to the first question was incorrect. Of the correct answers, 19 were conclusive and 20 were inconclusive. For the second question, one answer was wrong; of the correct answers, 17 were conclusive and 22 were inconclusive. The mean Flesch reading ease score was 27.64±10.85 for answers to the first question and 29.35±10.16 for answers to the second question (p = 0.47). The mean Flesch-Kincaid grade level was 15.06±2.79 for answers to the first question and 14.85±1.97 for answers to the second question (p = 0.69). When the grade levels were compared with a hypothetical 6th-grade reading level, they were significantly higher than expected (t = 20.57, p < 0.0001 for the first answers and t = 28.43, p < 0.0001 for the second answers).
Conclusion: ChatGPT is a partially effective tool for predicting and explaining DDIs. Patients who do not have immediate access to a healthcare facility for information about DDIs may take help from ChatGPT. However, on several occasions it may provide incomplete guidance, and further improvement is required before patients can rely on it for information about DDIs.
Competing Interests: The authors have declared that no competing interests exist. (Copyright © 2023, Juhi et al.)
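As a rough illustration of the readability analysis described in the Methods (this is not the authors' code, and the sample responses below are invented placeholders), the following Python sketch computes the Flesch reading ease score and Flesch-Kincaid grade level with the third-party textstat package and compares the grade levels against a hypothetical 6th-grade target using a one-sample t-test from scipy, mirroring the comparison reported in the Results.

```python
# Minimal sketch of the readability scoring described in the Methods.
# Assumes ChatGPT responses are available as plain-text strings;
# requires the third-party packages textstat and scipy.
import textstat
from scipy import stats

# Placeholder responses; the study analyzed 40 per question stage.
responses = [
    "Taking drug X together with drug Y may increase the risk of bleeding.",
    "These two medications are metabolized by the same liver enzyme.",
]

# Flesch reading ease (higher = easier) and Flesch-Kincaid grade level.
ease_scores = [textstat.flesch_reading_ease(r) for r in responses]
grade_levels = [textstat.flesch_kincaid_grade(r) for r in responses]

# One-sample t-test of grade levels against a hypothetical 6th-grade level.
t_stat, p_value = stats.ttest_1samp(grade_levels, popmean=6)
print(f"mean grade = {sum(grade_levels) / len(grade_levels):.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```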
Database: MEDLINE