Mitigating Object Hallucination via Data Augmented Contrastive Tuning

Author: Sarkar, Pritam, Ebrahimi, Sayna, Etemad, Ali, Beirami, Ahmad, Arık, Sercan Ö., Pfister, Tomas
Publication year: 2024
Subject:
Document type: Working Paper
Description: Despite their remarkable progress, Multimodal Large Language Models (MLLMs) tend to hallucinate factually inaccurate information. In this work, we address object hallucination in MLLMs, where the model produces information about an object that is not present in its input. We introduce a contrastive tuning method that can be applied to a pretrained, off-the-shelf MLLM to mitigate hallucinations while preserving its general vision-language capabilities. For a given factual token, we create a hallucinated counterpart through generative data augmentation that selectively alters the ground-truth information. The proposed contrastive tuning is applied at the token level to increase the relative likelihood of the factual token over the hallucinated one. Our thorough evaluation confirms the effectiveness of contrastive tuning in mitigating hallucination. Moreover, the method is simple, fast, and requires minimal training, with no additional overhead at inference.
Database: arXiv
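
The description refers to a token-level contrastive objective that raises the likelihood of a factual token relative to its augmented hallucinated counterpart. The sketch below is a minimal illustration of one such objective, assuming a logistic (DPO-style) contrast over the two token log-probabilities; the tensor names (logits, factual_ids, hallucinated_ids, mask) are hypothetical and this is not the authors' exact loss.

    import torch
    import torch.nn.functional as F

    def token_contrastive_loss(logits, factual_ids, hallucinated_ids, mask):
        """Illustrative token-level contrastive objective (assumed form).

        logits:           (batch, seq_len, vocab) scores from the MLLM
        factual_ids:      (batch, seq_len) ground-truth token ids
        hallucinated_ids: (batch, seq_len) ids of augmented hallucinated tokens
        mask:             (batch, seq_len) 1.0 where a factual/hallucinated pair exists

        Pushes up log p(factual token) relative to log p(hallucinated token)
        at each masked position via a logistic contrast.
        """
        log_probs = F.log_softmax(logits, dim=-1)
        lp_fact = log_probs.gather(-1, factual_ids.unsqueeze(-1)).squeeze(-1)
        lp_hall = log_probs.gather(-1, hallucinated_ids.unsqueeze(-1)).squeeze(-1)
        margin = lp_fact - lp_hall  # relative likelihood gap per token
        loss = -(F.logsigmoid(margin) * mask).sum() / mask.sum().clamp(min=1)
        return loss

In practice such a term would likely be combined with a regularizer toward the original model so that general vision-language capabilities are preserved, as the description emphasizes.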