Clinical reasoning using ChatGPT: Is it beyond credibility for physiotherapists' use?

Authors: Bilika P (Physiotherapy Department, Faculty of Health Sciences, Clinical Exercise Physiology and Rehabilitation Research Laboratory, University of Thessaly, Lamia, Greece); Stefanouli V (Physiotherapy Department, Faculty of Health Sciences, Health Assessment and Quality of Life Research Laboratory, University of Thessaly, Lamia, Greece); Strimpakos N (Physiotherapy Department, Faculty of Health Sciences, Health Assessment and Quality of Life Research Laboratory, University of Thessaly, Lamia, Greece; Division of Musculoskeletal and Dermatological Sciences, University of Manchester, Manchester, UK); Kapreli EV (Physiotherapy Department, Faculty of Health Sciences, Clinical Exercise Physiology and Rehabilitation Research Laboratory, University of Thessaly, Lamia, Greece)
Language: English
Source: Physiotherapy Theory and Practice [Physiother Theory Pract] 2024 Dec; Vol. 40 (12), pp. 2943-2962. Date of Electronic Publication: 2023 Dec 11.
DOI: 10.1080/09593985.2023.2291656
Abstract: Background: Artificial Intelligence (AI) tools are gaining popularity in healthcare. OpenAI released ChatGPT, a language model that comprehends and generates human language, on November 30, 2022; it can provide instant data analysis and recommendations. This is particularly significant in the dynamic field of physiotherapy, where its integration has the potential to enhance healthcare efficiency.
Objectives: This study aims to evaluate whether ChatGPT-3.5 (free version) provides consistent and accurate clinical responses, whether it can imitate human clinical reasoning in simple and complex scenarios, and whether it can produce a differential diagnosis.
Methods: Two studies were conducted using ChatGPT-3.5. Study 1 evaluated the consistency and accuracy of ChatGPT's responses in clinical assessment: ten user-participants each submitted the question "Which are the main steps for a completed physiotherapy assessment?" Study 2 assessed ChatGPT's differential diagnostic ability using published case studies evaluated by two independent participants; the case reports consisted of one simple and one complex scenario.
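For readers who wish to approximate the repeated-prompt consistency check of Study 1 programmatically, the following is a minimal Python sketch. It is an illustration only: the study itself used ten human participants typing the prompt into the free ChatGPT-3.5 web interface, not the API, and the model name, keyword list, and scoring shown here are assumptions rather than the authors' coding scheme.

```python
# Illustrative sketch (not the authors' method): re-submit the Study 1 prompt
# several times via the OpenAI Python SDK and crudely check which expected
# assessment steps each answer mentions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Which are the main steps for a completed physiotherapy assessment?"

responses = []
for trial in range(10):  # ten independent submissions, mirroring Study 1
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed closest API model to ChatGPT-3.5
        messages=[{"role": "user", "content": PROMPT}],
    )
    responses.append(reply.choices[0].message.content)

# Keyword list is hypothetical, chosen from steps named in the abstract.
EXPECTED_STEPS = [
    "subjective examination",
    "objective examination",
    "goal setting",
    "treatment plan",
    "re-assessment",
]
for i, text in enumerate(responses, start=1):
    covered = [s for s in EXPECTED_STEPS if s in text.lower()]
    print(f"Response {i}: mentions {len(covered)}/{len(EXPECTED_STEPS)} steps")
```

Because each API call is an independent session, this mirrors the study's observation that identical prompts can yield answers of varying completeness; counting omitted steps across runs gives a rough analogue of the 30-40% omission rates reported below.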
Results: Study 1 underscored the variability of ChatGPT's responses, which ranged from comprehensive to concise. Notably, essential steps such as re-assessment and subjective examination were omitted in 30% and 40% of the responses, respectively. In Study 2, ChatGPT demonstrated a capability for evidence-based clinical reasoning, which was particularly evident in the simple clinical scenario. Question phrasing significantly affected the generated answers.
Conclusions: This study highlights the potential benefits of using ChatGPT in healthcare. It also provides a balanced perspective on ChatGPT's strengths and limitations and emphasizes the importance of using AI tools in a responsible and informed manner.
Database: MEDLINE