Is Artificial Intelligence a Useful Tool for Clinical Practice of Oral and Maxillofacial Surgery?

Author: Işik G; Department of Oral and Maxillofacial Surgery, School of Dentistry, Ege University., Kafadar-Gürbüz İA; Department of Oral and Maxillofacial Surgery, School of Dentistry, Ege University., Elgün F; Department of Oral and Maxillofacial Surgery, School of Dentistry, Ege University., Kara RU; School of Dentistry, Ege University., Berber B; School of Dentistry, Ege University., Özgül S; Department of Biostatistics and Medical Informatics, School of Medicine, Ege University, Izmir, Turkey., Günbay T; Department of Oral and Maxillofacial Surgery, School of Dentistry, Ege University.
Language: English
Source: The Journal of craniofacial surgery [J Craniofac Surg] 2024 Oct 01. Date of Electronic Publication: 2024 Oct 01.
DOI: 10.1097/SCS.0000000000010686
Abstract: This study aimed to assess the usefulness of ChatGPT Plus-generated responses to clinically specific questions in oral and maxillofacial surgery. This cross-sectional study was conducted with questions composed according to the Clinical Practice Guide of Ege University, School of Dentistry, covering different subjects of oral and maxillofacial surgery at the undergraduate level. The questions were classified by difficulty level (easy, medium, and hard) and entered into ChatGPT Plus. Three researchers evaluated the responses using a 7-point Likert-type accuracy scale and a modified global quality scale (range: 1-5). In addition, an error analysis was carried out for questions scoring ≤4 on the accuracy assessment. A total of 66 questions were included in this study. The questions covered dental anesthesia, tooth extraction, preoperative and postoperative complications, suturing, writing prescriptions, and temporomandibular joint examination. The median accuracy score of ChatGPT Plus responses was 5, with 75% of the responses scoring 4 or above. The median quality score was 4, with 75% of the responses scoring 3 or above. There was a significant difference among the 3 difficulty levels in both accuracy and quality scores (P<0.001 and 0.001, respectively). The median scores of hard-level questions were lower than those of easy-level and medium-level questions. The study outcomes indicate high accuracy and quality in ChatGPT Plus's responses, except for questions requiring a detailed response or commentary.
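Note: the abstract reports a significant difference in scores across the three difficulty levels but does not name the statistical test used. As a rough illustration only, a nonparametric comparison of ordinal Likert scores grouped by difficulty could be sketched in Python as below; the score values and the choice of the Kruskal-Wallis test are assumptions for illustration, not details taken from the study.

    # Hypothetical sketch: nonparametric comparison of accuracy scores by
    # question difficulty. The scores below are invented placeholders.
    from scipy.stats import kruskal

    easy   = [7, 6, 7, 6, 5, 7]   # example 7-point Likert accuracy scores
    medium = [6, 5, 6, 5, 6, 4]
    hard   = [4, 3, 5, 4, 3, 4]

    stat, p_value = kruskal(easy, medium, hard)
    print(f"Kruskal-Wallis H = {stat:.2f}, P = {p_value:.4f}")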
Competing Interests: The authors report no conflicts of interest.
(Copyright © 2024 by Mutaz B. Habal, MD.)
Database: MEDLINE