Can AI Answer My Questions? Utilizing Artificial Intelligence in the Perioperative Assessment for Abdominoplasty Patients.
Author: Lim B; Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3199, Australia. Seth I; Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3199, Australia. Cuomo R; Plastic Surgery Unit, Department of Medicine, Surgery and Neuroscience, University of Siena, Siena, Italy. roberto.cuomo@unisi.it. Kenney PS; Department of Plastic Surgery, Vejle Hospital, Beriderbakken 4, 7100, Vejle, Denmark; Department of Plastic and Breast Surgery, Aarhus University Hospital, Aarhus, Denmark. Ross RJ; Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3199, Australia. Sofiadellis F; Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3199, Australia. Pentangelo P; University of Salerno, Fisciano, Italy. Ceccaroni A; University of Salerno, Fisciano, Italy. Alfano C; University of Salerno, Fisciano, Italy. Rozen WM; Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3199, Australia.
Language: English
Source: Aesthetic Plastic Surgery [Aesthetic Plast Surg] 2024 Jun 19. Date of Electronic Publication: 2024 Jun 19.
DOI: 10.1007/s00266-024-04157-0
Abstract:
Background: Abdominoplasty is a common operation performed for a range of cosmetic and functional indications, often in the context of divarication of the recti, significant weight loss, and pregnancy. Despite this, patient-surgeon communication gaps can hinder informed decision-making. The integration of large language models (LLMs) in healthcare offers potential for enhancing patient information. This study evaluated the feasibility of using LLMs to answer perioperative queries.
Methods: This study assessed the efficacy of four leading LLMs (OpenAI's ChatGPT-3.5, Anthropic's Claude, Google's Gemini, and Bing's CoPilot) using fifteen unique prompts. All outputs were assessed for readability using the Flesch-Kincaid Grade Level, the Flesch Reading Ease score, and the Coleman-Liau index. Quality was evaluated using the DISCERN score and a Likert scale. Scores were assigned by two plastic surgery residents and then reviewed and discussed by five specialist plastic surgeons until consensus was reached.
Results: ChatGPT-3.5's responses required the highest reading level for comprehension, followed by Gemini, Claude, and then CoPilot. Claude provided the most appropriate and actionable advice. In terms of patient-friendliness, CoPilot outperformed the other models, enhancing engagement and information comprehensiveness. ChatGPT-3.5 and Gemini offered adequate, though unremarkable, advice, employing more professional language. CoPilot was the only model to include visual aids and hyperlinks, although these were of limited helpfulness and acceptability, and it faced limitations in responding to certain queries.
Conclusion: ChatGPT-3.5, Gemini, Claude, and Bing's CoPilot showed differences in readability and reliability. LLMs offer unique advantages for patient care but require careful selection. Future research should integrate the strengths of individual LLMs and address their weaknesses to optimize patient education.
Level of Evidence V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
(© 2024. The Author(s).)
Database: MEDLINE
External link:
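The three readability indices named in the Methods are standard published formulas computed from sentence, word, letter, and syllable counts. The minimal sketch below illustrates how such scores are typically computed; it is not the authors' tooling, and the regex tokenizer and vowel-group syllable counter are simplifying assumptions (real readability tools use dictionary-based syllabification).

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of consecutive vowels.
    # Assumption for illustration, not a linguistically exact syllabifier.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    """Standard formulas for the three indices used in the study."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    letters = sum(len(w) for w in words)
    syllables = sum(count_syllables(w) for w in words)

    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word

    # Flesch Reading Ease: higher scores indicate easier text.
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    # Flesch-Kincaid Grade Level: approximate US school grade.
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    # Coleman-Liau index: L = letters per 100 words, S = sentences per 100 words.
    L = 100 * letters / len(words)
    S = 100 * len(sentences) / len(words)
    cli = 0.0588 * L - 0.296 * S - 15.8

    return {"Flesch Reading Ease": fre,
            "Flesch-Kincaid Grade Level": fkgl,
            "Coleman-Liau Index": cli}

if __name__ == "__main__":
    sample = ("Abdominoplasty removes excess skin and fat from the abdomen. "
              "Recovery usually takes several weeks.")
    for name, score in readability(sample).items():
        print(f"{name}: {score:.1f}")
```

Note the directionality when reading the Results: lower grade-level and Coleman-Liau scores and higher Reading Ease scores correspond to more accessible text, so ChatGPT-3.5 "requiring the highest level for comprehension" means its output was the hardest to read.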