Large Language Model Benchmarks in Medical Tasks

Author: Yan, Lawrence K. Q., Niu, Qian, Li, Ming, Zhang, Yichao, Yin, Caitlyn Heqi, Fei, Cheng, Peng, Benji, Bi, Ziqian, Feng, Pohsun, Chen, Keyu, Wang, Tianyang, Wang, Yunze, Chen, Silin, Liu, Ming, Liu, Junyu
Publication year: 2024
Subject:
Document type: Working Paper
Description: With the increasing application of large language models (LLMs) in the medical domain, evaluating these models' performance on benchmark datasets has become crucial. This paper presents a comprehensive survey of benchmark datasets employed in medical LLM tasks. These benchmarks span text, image, and multimodal modalities and cover different aspects of medical knowledge, such as electronic health records (EHRs), doctor-patient dialogues, medical question answering, and medical image captioning. The survey categorizes the datasets by modality, discussing their significance, data structure, and impact on the development of LLMs for clinical tasks such as diagnosis, report generation, and predictive decision support. Key benchmarks include MIMIC-III, MIMIC-IV, BioASQ, PubMedQA, and CheXpert, which have facilitated advances in tasks such as medical report generation, clinical summarization, and synthetic data generation. The paper summarizes the challenges and opportunities in leveraging these benchmarks to advance multimodal medical intelligence, emphasizing the need for datasets with greater language diversity, structured omics data, and innovative approaches to data synthesis. This work also provides a foundation for future research on the application of LLMs in medicine, contributing to the evolving field of medical artificial intelligence.
Comment: 25 pages, 5 tables
Database: arXiv