Performance of generative pre-trained transformers (GPTs) in Certification Examination of the College of Family Physicians of Canada.

Author: Mousavi M; Department of Family Medicine, Faculty of Medicine, University of Saskatchewan, Nipawin, Saskatchewan, Canada., Shafiee S; Department of Family Medicine, Saskatchewan Health Authority, Riverside Health Complex, Turtleford, Saskatchewan, Canada., Harley JM; Department of Surgery, Faculty of Medicine and Health Sciences, McGill University, Montreal, Quebec, Canada.; Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada.; Institute for Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Montreal, Quebec, Canada., Cheung JCK; McGill University School of Computer Science, Montreal, Quebec, Canada.; CIFAR AI Chair, Mila-Quebec AI Institute, Montreal, Quebec, Canada., Abbasgholizadeh Rahimi S; Department of Family Medicine, McGill University, Montreal, Quebec, Canada samira.rahimi@mcgill.ca.; Mila-Quebec AI Institute, Montreal, Quebec, Canada.; Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, Quebec, Canada.; Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada.
Language: English
Source: Family medicine and community health [Fam Med Community Health] 2024 May 28; Vol. 12 (Suppl 1). Date of Electronic Publication: 2024 May 28.
DOI: 10.1136/fmch-2023-002626
Abstract: Introduction: The application of large language models such as generative pre-trained transformers (GPTs) has been promising in medical education, and their performance has been tested on various medical exams. This study aims to assess the performance of GPTs in responding to a set of sample questions of short-answer management problems (SAMPs) from the certification exam of the College of Family Physicians of Canada (CFPC).
Method: Between August 8th and 25th, 2023, we used GPT-3.5 and GPT-4 in five rounds to answer a sample of 77 SAMP questions from the CFPC website. Two independent certified family physician reviewers scored the AI-generated responses twice: first, according to the CFPC answer key (ie, CFPC score), and second, based on their knowledge and other references (ie, Reviewers' score). An ordinal logistic generalised estimating equations (GEE) model was applied to analyse the repeated measures across the five rounds.
Result: According to the CFPC answer key, 607 (73.6%) lines of answers by GPT-3.5 and 691 (81%) by GPT-4 were deemed accurate. The reviewers' scoring suggested that about 84% of the lines of answers provided by GPT-3.5 and 93% of those provided by GPT-4 were correct. The GEE analysis confirmed that, over the five rounds, the odds of achieving a higher CFPC score percentage for GPT-4 were 2.31 times those for GPT-3.5 (OR: 2.31; 95% CI: 1.53 to 3.47; p<0.001). Similarly, the Reviewers' score percentage for responses provided by GPT-4 over the five rounds was 2.23 times more likely to exceed that of GPT-3.5 (OR: 2.23; 95% CI: 1.22 to 4.06; p=0.009). Re-running the GPTs after a one-week interval, regenerating the responses, or using or not using the prompt did not significantly change the CFPC score percentage.
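As an illustrative sketch of how these odds ratios relate to the underlying model coefficients: for a Wald-type confidence interval, the log-odds coefficient and its standard error can be back-calculated from the reported OR and 95% CI (the function name below is hypothetical; the OR and CI values are taken from the results above).

```python
import math

def wald_from_or(or_point, ci_low, ci_high, z_crit=1.96):
    """Back-calculate the log-odds coefficient, its standard error,
    and the Wald z-statistic from an odds ratio and its Wald-type
    95% confidence interval (CI assumed symmetric on the log scale)."""
    beta = math.log(or_point)                                  # log-odds coefficient
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z_crit) # SE of beta
    z = beta / se                                              # Wald z-statistic
    return beta, se, z

# CFPC score, GPT-4 vs GPT-3.5: OR 2.31 (95% CI 1.53 to 3.47)
beta, se, z = wald_from_or(2.31, 1.53, 3.47)
print(f"beta={beta:.3f}, SE={se:.3f}, z={z:.2f}")
```

A z-statistic of roughly 4 is consistent with the reported p<0.001 for the CFPC-score comparison.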
Conclusion: In our study, we used GPT-3.5 and GPT-4 to answer complex, open-ended sample questions of the CFPC exam and showed that more than 70% of the answers were accurate and that GPT-4 outperformed GPT-3.5 in responding to the questions. Large language models such as GPTs seem promising for assisting candidates for the CFPC exam by providing potential answers. However, their use for family medicine education and exam preparation requires further study.
Competing interests: None declared.
(© Author(s) (or their employer(s)) 2024. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.)
Database: MEDLINE