ERVQA: A Dataset to Benchmark the Readiness of Large Vision Language Models in Hospital Environments

Author: Ray, Sourjyadip; Gupta, Kushal; Kundu, Soumi; Kasat, Payal Arvind; Aditya, Somak; Goyal, Pawan
Publication Year: 2024
Subject:
Document Type: Working Paper
Description: The global shortage of healthcare workers has created demand for smart healthcare assistants that can help monitor and alert healthcare workers when necessary. We examine the healthcare knowledge of existing Large Vision Language Models (LVLMs) via the Visual Question Answering (VQA) task in hospital settings, using expert-annotated open-ended questions. We introduce the Emergency Room Visual Question Answering (ERVQA) dataset, consisting of <image, question, answer> triplets covering diverse emergency room scenarios, a seminal benchmark for LVLMs. By developing a detailed error taxonomy and analyzing answer trends, we reveal the nuanced nature of the task. We benchmark state-of-the-art open-source and closed LVLMs using traditional and adapted VQA metrics: Entailment Score and CLIPScore Confidence. Analyzing errors across models, we infer trends based on properties such as decoder type, model size, and in-context examples. Our findings suggest that the ERVQA dataset presents a highly complex task, highlighting the need for specialized, domain-specific solutions.
Comment: Accepted at EMNLP 2024
Database: arXiv