Microsoft Bing outperforms five other generative artificial intelligence chatbots in the Antwerp University multiple choice medical license exam.
Author: Morreel S (Department of Family Medicine and Population Health, University of Antwerp, Antwerp, Belgium); Verhoeven V (Department of Family Medicine and Population Health, University of Antwerp, Antwerp, Belgium); Mathysen D (Department of Family Medicine and Population Health, University of Antwerp, Antwerp, Belgium; Dean's Department, University of Antwerp, Antwerp, Belgium)
Language: English
Source: PLOS Digital Health [PLOS Digit Health] 2024 Feb 14; Vol. 3 (2), e0000349. Date of Electronic Publication: 2024 Feb 14 (Print Publication: 2024).
DOI: 10.1371/journal.pdig.0000349
Abstract: Recently developed chatbots based on large language models (hereafter called bots) have promising features that could facilitate medical education. Several bots are freely available, but their proficiency has been insufficiently evaluated. In this study, the authors tested the current performance of six widely used bots on the multiple-choice medical licensing exam of the University of Antwerp (Belgium): ChatGPT (OpenAI), Bard (Google), New Bing (Microsoft), Claude Instant (Anthropic), Claude+ (Anthropic), and GPT-4 (OpenAI). The primary outcome was performance on the exam, expressed as the proportion of correct answers. Secondary analyses were done for a variety of features of the exam questions: easy versus difficult questions, grammatically positive versus negative questions, and clinical vignettes versus theoretical questions. Reasoning errors and untruthful statements (hallucinations) in the bots' answers were examined. All bots passed the exam; Bing and GPT-4 (both 76% correct answers) outperformed the other bots (62-67%, p = 0.03) and students (61%). Bots performed worse on difficult questions (62%, p = 0.06) but outperformed students (32%) on those questions by an even larger margin (p < 0.01). Hallucinations were found in 7% of Bing's and GPT-4's answers, significantly less often than in Bard's (22%, p < 0.01) and Claude Instant's (19%, p = 0.02). Although the creators of all bots try, to some extent, to prevent their bots from being used as a medical doctor, none of the tested bots succeeded, as none refused to answer all clinical case questions. Bing was able to detect weak or ambiguous exam questions. Bots could be used as a time-efficient tool to improve the quality of a multiple-choice exam. Competing Interests: The authors have declared that no competing interests exist. (Copyright: © 2024 Morreel et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)
Database: MEDLINE
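The primary outcome in the abstract above is a simple proportion of correct answers, with pairwise differences reported as p-values. Below is a minimal illustrative sketch of how such a comparison could be computed; it is not the authors' analysis code, the question counts are placeholders rather than the study's actual numbers, and the two-proportion z-test is an assumed choice of test (the record does not state which procedure was used).

```python
# Illustrative sketch only: NOT the authors' analysis code.
# Counts below are placeholders, not the study's actual question numbers,
# and the two-proportion z-test is an assumed choice of test.
import math


def proportion_correct(correct: int, total: int) -> float:
    """Primary-outcome style score: share of exam questions answered correctly."""
    return correct / total


def two_proportion_z_test(c1: int, n1: int, c2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value


# Hypothetical example: a bot scoring 76% versus students scoring 61%
# on a placeholder exam of 100 questions.
n_questions = 100
bot_correct, student_correct = 76, 61
z, p = two_proportion_z_test(bot_correct, n_questions, student_correct, n_questions)
print(f"bot {proportion_correct(bot_correct, n_questions):.0%} "
      f"vs students {proportion_correct(student_correct, n_questions):.0%}: "
      f"z = {z:.2f}, p = {p:.3f}")
```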