Popis: |
Over the past decade, Artificial Intelligence (AI) has expanded significantly, with increasing adoption across various industries, including medicine. Recently, large language models (LLMs) such as GPT-3, Bard, and GPT-4 have demonstrated remarkable language capabilities. While previous studies have explored their potential on general medical knowledge tasks, here we assess their clinical knowledge and reasoning abilities in a specialized medical context. We study and compare their performance on both the written and oral portions of the comprehensive and challenging American Board of Anesthesiology (ABA) exam, which evaluates candidates' knowledge and competence in anesthesia practice. In addition, we invited two board examiners to evaluate the AI-generated answers without disclosing their origin. Our results reveal that only GPT-4 passed the written exam, achieving an accuracy of 78% on the basic section and 80% on the advanced section. In comparison, the earlier or smaller GPT-3 and Bard models scored 58% and 47% on the basic exam, and 50% and 46% on the advanced exam, respectively. Consequently, only GPT-4 was evaluated on the oral exam, with the examiners concluding that it had a high likelihood of passing the actual ABA exam. We also observe that these models exhibit varying degrees of proficiency across distinct topics, which could indicate the relative quality of the information in the corresponding training data and may serve as a predictor of which anesthesiology subspecialty is most likely to see the earliest integration with AI.