Explaining Multi-modal Large Language Models by Analyzing their Vision Perception

Author: Giulivi, Loris; Boracchi, Giacomo
Year: 2024
Document Type: Working Paper
Description: Multi-modal Large Language Models (MLLMs) have demonstrated remarkable capabilities in understanding and generating content across modalities such as images and text. However, their limited interpretability remains a challenge and hinders their adoption in critical applications. This work proposes a novel approach to enhancing the interpretability of MLLMs by focusing on the image embedding component. We combine an open-world localization model with an MLLM, creating a new architecture that simultaneously produces text and object localization outputs from the same vision embedding. The proposed architecture substantially improves interpretability: it enables us to design a novel saliency map that explains any output token, to identify model hallucinations, and to assess model biases through semantic adversarial perturbations.
Comment: Submitted to BMVC 2024
Database: arXiv
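The abstract describes a shared vision embedding that feeds both a text head and a localization head, plus a saliency map for individual output tokens. The toy sketch below illustrates that idea only; it is not the authors' code, and all functions, dimensions, and the finite-difference saliency are illustrative assumptions.

```python
# Hypothetical sketch (NOT the paper's implementation): one shared vision
# embedding drives both a toy text head and a toy localization head, and a
# simple perturbation-based saliency map scores each embedding dimension
# for one output token.
import numpy as np

rng = np.random.default_rng(0)

D = 16       # embedding dimension (illustrative)
VOCAB = 8    # toy vocabulary size


def encode_image(image: np.ndarray, W_enc: np.ndarray) -> np.ndarray:
    """Toy image encoder: flatten pixels through a fixed random projection."""
    return image.reshape(-1) @ W_enc


def text_head(emb: np.ndarray, W_text: np.ndarray) -> np.ndarray:
    """Toy language head: logits over the small vocabulary."""
    return emb @ W_text


def loc_head(emb: np.ndarray, W_loc: np.ndarray) -> np.ndarray:
    """Toy localization head: one (x, y, w, h) box squashed into [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(emb @ W_loc)))


def token_saliency(emb, W_text, token, eps=1e-3):
    """Finite-difference saliency: effect of each embedding dimension on the
    logit of one output token (a simplified stand-in for the paper's maps)."""
    base = text_head(emb, W_text)[token]
    sal = np.zeros_like(emb)
    for i in range(emb.size):
        e = emb.copy()
        e[i] += eps
        sal[i] = (text_head(e, W_text)[token] - base) / eps
    return sal


image = rng.random((4, 4))
W_enc = rng.standard_normal((image.size, D))
W_text = rng.standard_normal((D, VOCAB))
W_loc = rng.standard_normal((D, 4))

emb = encode_image(image, W_enc)     # single shared vision embedding
logits = text_head(emb, W_text)      # text output from the embedding
box = loc_head(emb, W_loc)           # localization output from the SAME embedding
sal = token_saliency(emb, W_text, token=0)
```

Because both heads read the same embedding, a perturbation that changes the saliency of a token also moves the predicted box, which is the property the abstract exploits for explaining tokens and probing hallucinations.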