BiVLC: Extending Vision-Language Compositionality Evaluation with Text-to-Image Retrieval

Author: Miranda, Imanol; Salaberria, Ander; Agirre, Eneko; Azkune, Gorka
Year of publication: 2024
Subject:
Document type: Working Paper
Description: Existing Vision-Language Compositionality (VLC) benchmarks like SugarCrepe are formulated as image-to-text retrieval problems: given an image, a model must select between the correct textual description and a synthetic hard negative text. In this work, we present the Bidirectional Vision-Language Compositionality (BiVLC) dataset. The novelty of BiVLC is to add a synthetic hard negative image generated from the synthetic text, resulting in two image-to-text retrieval examples (one for each image) and, more importantly, two text-to-image retrieval examples (one for each text). Human annotators filter out ill-formed examples, ensuring the validity of the benchmark. The experiments on BiVLC uncover a weakness of current multimodal models, as they perform poorly in the text-to-image direction. In fact, when both retrieval directions are considered, the conclusions drawn in previous works change significantly. In addition to the benchmark, we show that a contrastive model trained with synthetic images and texts significantly improves over the base model on SugarCrepe and on BiVLC in both retrieval directions. The gap to human performance on BiVLC confirms that Vision-Language Compositionality is still a challenging problem. BiVLC and code are available at https://imirandam.github.io/BiVLC_project_page.
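The description above implies that each BiVLC instance pairs a positive image, a positive text, a hard negative text, and a hard negative image, yielding four retrieval decisions. The sketch below illustrates how such an instance could be scored; the function name and the placeholder similarity values are illustrative assumptions, not the authors' implementation (a real evaluation would obtain scores from a vision-language model such as CLIP).

```python
# Hedged sketch: how one BiVLC instance yields four retrieval examples.
# The similarity values below are illustrative placeholders, not model outputs.

def bivlc_outcomes(sim):
    """sim maps (image, text) pairs to similarity scores, where each element
    is "pos" (original) or "neg" (synthetic hard negative).
    Returns four booleans: two image-to-text and two text-to-image decisions."""
    i2t_pos = sim[("pos", "pos")] > sim[("pos", "neg")]  # positive image -> pick positive text
    i2t_neg = sim[("neg", "neg")] > sim[("neg", "pos")]  # negative image -> pick negative text
    t2i_pos = sim[("pos", "pos")] > sim[("neg", "pos")]  # positive text -> pick positive image
    t2i_neg = sim[("neg", "neg")] > sim[("pos", "neg")]  # negative text -> pick negative image
    return i2t_pos, i2t_neg, t2i_pos, t2i_neg

# A model that handles image-to-text for the positive image but fails elsewhere:
example = {("pos", "pos"): 0.31, ("pos", "neg"): 0.27,
           ("neg", "pos"): 0.29, ("neg", "neg"): 0.24}
print(bivlc_outcomes(example))  # → (True, False, True, False)
```

This makes the paper's point concrete: a model can succeed on the image-to-text pairs while still failing text-to-image, which only becomes visible once both directions are scored.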
Comment: Accepted to NeurIPS 24 Datasets and Benchmarks Track; Project page at: https://imirandam.github.io/BiVLC_project_page/
Database: arXiv