OpenMedLM: prompt engineering can out-perform fine-tuning in medical question-answering with open-source large language models.
Author: Maharjan J, Garikipati A, Singh NP, Cyrus L, Sharma M, Ciobanu M, Barnes G, Thapa R, Mao Q (qmao@fortahealth.com), Das R — all: Montera, Inc. Dba Forta, 548 Market St., PMB 89605, San Francisco, CA, 94104-5401, USA.
Language: English
Source: Scientific Reports [Sci Rep] 2024 Jun 19; Vol. 14 (1), p. 14156. Date of Electronic Publication: 2024 Jun 19.
DOI: 10.1038/s41598-024-64827-6
Abstract: LLMs can accomplish specialized medical knowledge tasks; however, equitable access is hindered by the need for extensive fine-tuning, the requirement for specialized medical data, and limited access to proprietary models. Open-source (OS) medical LLMs show performance improvements and provide the transparency and compliance required in healthcare. We present OpenMedLM, a prompting platform delivering state-of-the-art (SOTA) performance for OS LLMs on medical benchmarks. We evaluated OS foundation LLMs (7B-70B) on medical benchmarks (MedQA, MedMCQA, PubMedQA, MMLU medical-subset) and selected Yi34B for developing OpenMedLM. Prompting strategies included zero-shot, few-shot, chain-of-thought, and ensemble/self-consistency voting. OpenMedLM delivered OS SOTA results on three medical LLM benchmarks, surpassing the previous best-performing OS models, which relied on costly and extensive fine-tuning. OpenMedLM presents the first results to date demonstrating that OS foundation models can reach this level of performance without specialized fine-tuning. The model achieved 72.6% accuracy on MedQA, outperforming the previous SOTA by 2.4%, and 81.7% accuracy on the MMLU medical-subset, making it the first OS LLM to surpass 80% accuracy on this benchmark. Our results highlight medical-specific emergent properties in OS LLMs not documented elsewhere to date, validate the ability of OS models to accomplish healthcare tasks, and underscore the benefits of prompt engineering for improving the performance of accessible LLMs in medical applications. (© 2024. The Author(s).)
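The abstract names ensemble/self-consistency voting among OpenMedLM's prompting strategies. A minimal sketch of that voting step follows; it is an illustration, not the paper's code, and the sampled answers are hypothetical stand-ins for final choices parsed from repeated stochastic chain-of-thought completions of one multiple-choice question.

```python
from collections import Counter

def self_consistency_vote(answers):
    """Majority vote over answers sampled from repeated CoT generations.

    `answers` is a list of final answer choices (e.g. 'A'-'D') parsed
    from multiple stochastic completions of the same question; the most
    frequent choice becomes the ensemble prediction.
    """
    if not answers:
        raise ValueError("need at least one sampled answer")
    # most_common(1) returns [(answer, count)] for the top answer.
    return Counter(answers).most_common(1)[0][0]

# Hypothetical example: five sampled completions of one MedQA item.
sampled = ["B", "B", "C", "B", "A"]
print(self_consistency_vote(sampled))  # → B
```

The design intuition is that independent reasoning chains that converge on the same answer are more likely to be correct than any single chain, so voting filters out idiosyncratic reasoning errors.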
Database: MEDLINE