Author:
Humar, Pooja; Asaad, Malke; Bengur, Fuat Baris; Nguyen, Vu
Source:
Aesthetic Surgery Journal; Dec 2023, Vol. 43, Issue 12, p. NP1085-NP1089, 5p
Abstract:
Background: ChatGPT is an artificial intelligence language model developed and released by OpenAI (San Francisco, CA) in late 2022. Objectives: The aim of this study was to evaluate the performance of ChatGPT on the Plastic Surgery In-Service Examination and to compare it to residents' performance nationally. Methods: The Plastic Surgery In-Service Examinations from 2018 to 2022 were used as a question source. For each question, the stem and all multiple-choice options were imported into ChatGPT. The 2022 examination was used to compare the performance of ChatGPT to that of plastic surgery residents nationally. Results: In total, 1129 questions were included in the final analysis, and ChatGPT answered 630 (55.8%) of these correctly. ChatGPT scored highest on the 2021 exam (60.1%) and on the comprehensive section (58.7%). There were no significant differences in questions answered correctly among exam years or among the different exam sections. ChatGPT answered 57% of questions correctly on the 2022 exam. When compared with the performance of plastic surgery residents in 2022, ChatGPT would rank in the 49th percentile for first-year integrated plastic surgery residents, the 13th percentile for second-year residents, the 5th percentile for third- and fourth-year residents, and the 0th percentile for fifth- and sixth-year residents. Conclusions: ChatGPT performs at the level of a first-year resident on the Plastic Surgery In-Service Examination. However, it performed poorly when compared with residents in more advanced years of training. Although ChatGPT has many undeniable benefits and potential uses in healthcare and medical education, additional research is required to assess its efficacy. [ABSTRACT FROM AUTHOR]
Database:
Complementary Index
External link:
|