ChatGPT is not ready yet for use in providing mental health assessment and interventions.

Author: Dergaa I; Primary Health Care Corporation (PHCC), Doha, Qatar.; Research Unit Physical Activity, Sport, and Health, UR18JS01, National Observatory of Sport, Tunis, Tunisia.; High Institute of Sport and Physical Education, University of Sfax, Sfax, Tunisia., Fekih-Romdhane F; The Tunisian Center of Early Intervention in Psychosis, Department of Psychiatry 'Ibn Omrane', Razi Hospital, Manouba, Tunisia.; Faculty of Medicine of Tunis, Tunis El Manar University, Tunis, Tunisia., Hallit S; School of Medicine and Medical Sciences, Holy Spirit University of Kaslik, Jounieh, Lebanon.; Psychology Department, College of Humanities, Effat University, Jeddah, Saudi Arabia.; Applied Science Research Center, Applied Science Private University, Amman, Jordan., Loch AA; Laboratorio de Neurociencias (LIM 27), Hospital das Clínicas HCFMUSP, Faculdade de Medicina, Instituto de Psiquiatria, Universidade de Sao Paulo, São Paulo, Brazil.; Instituto Nacional de Biomarcadores em Neuropsiquiatria (INBION), Conselho Nacional de Desenvolvimento Científico e Tecnológico, São Paulo, Brazil., Glenn JM; Neurotrack Technologies, Redwood City, CA, United States., Fessi MS; High Institute of Sport and Physical Education, University of Sfax, Sfax, Tunisia., Ben Aissa M; Department of Human and Social Sciences, Higher Institute of Sport and Physical Education of Kef, University of Jendouba, Jendouba, Tunisia., Souissi N; Research Unit Physical Activity, Sport, and Health, UR18JS01, National Observatory of Sport, Tunis, Tunisia., Guelmami N; Department of Health Sciences (DISSAL), Postgraduate School of Public Health, University of Genoa, Genoa, Italy., Swed S; Faculty of Medicine, Aleppo University, Aleppo, Syria., El Omri A; Surgical Research Section, Department of Surgery, Hamad Medical Corporation, Doha, Qatar., Bragazzi NL; Laboratory for Industrial and Applied Mathematics, Department of Mathematics and Statistics, York University, Toronto, ON, Canada., Ben Saad H; Service of Physiology and
Functional Explorations, Farhat HACHED Hospital, University of Sousse, Sousse, Tunisia.; Heart Failure (LR12SP09) Research Laboratory, Farhat HACHED Hospital, University of Sousse, Sousse, Tunisia.
Language: English
Source: Frontiers in psychiatry [Front Psychiatry] 2024 Jan 04; Vol. 14, pp. 1277756. Date of Electronic Publication: 2024 Jan 04 (Print Publication: 2023).
DOI: 10.3389/fpsyt.2023.1277756
Abstract: Background: Psychiatry is a specialized field of medicine that focuses on the diagnosis, treatment, and prevention of mental health disorders. With advancements in technology and the rise of artificial intelligence (AI), there has been a growing interest in exploring the potential of AI language model systems, such as Chat Generative Pre-training Transformer (ChatGPT), to assist in the field of psychiatry.
Objective: Our study aimed to evaluate the effectiveness, reliability, and safety of ChatGPT in assisting patients with mental health problems, and to assess its potential as a collaborative tool for mental health professionals through simulated interactions with three distinct imaginary patients.
Methods: Three imaginary patient scenarios (cases A, B, and C) were created, representing different mental health problems. All three patients present with, and seek to eliminate, the same chief complaint (i.e., difficulty falling asleep and waking up frequently during the night over the last 2 weeks). ChatGPT was engaged as a virtual psychiatric assistant to provide responses and treatment recommendations.
Results: In case A, the recommendations were relatively appropriate (albeit non-specific) and could potentially be beneficial for both users and clinicians. However, as the complexity of the clinical cases increased (cases B and C), the information and recommendations generated by ChatGPT became inappropriate, even dangerous, and the limitations of the program became more glaring. The main strengths of ChatGPT lie in its ability to provide quick responses to user queries and to simulate empathy. One notable limitation is ChatGPT's inability to interact with users to collect further information relevant to the diagnosis and management of a patient's clinical condition. Another serious limitation is ChatGPT's inability to use critical thinking and clinical judgment to guide patient management.
Conclusion: As of July 2023, ChatGPT failed to give simple medical advice in certain clinical scenarios. This indicates that the quality of ChatGPT-generated content remains far from being a reliable guide for users and professionals seeking accurate mental health information. It therefore remains premature to draw conclusions about the usefulness and safety of ChatGPT in mental health practice.
Competing Interests: JG was employed by Neurotrack Technologies. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The author(s) declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process or the final decision.
(Copyright © 2024 Dergaa, Fekih-Romdhane, Hallit, Loch, Glenn, Fessi, Ben Aissa, Souissi, Guelmami, Swed, El Omri, Bragazzi and Ben Saad.)
Database: MEDLINE