A negative case analysis of visual grounding methods for VQA
Author: | Robik Shrestha, Kushal Kafle, Christopher Kanan |
---|---|
Year of publication: | 2020 |
Subject: | FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV); Computer Science - Artificial Intelligence (cs.AI); Computer Science - Computation and Language (cs.CL); Machine learning; Artificial intelligence; Question answering; Computer science |
Source: | ACL |
Description: | Existing Visual Question Answering (VQA) methods tend to exploit dataset biases and spurious statistical correlations, instead of producing right answers for the right reasons. To address this issue, recent bias mitigation methods for VQA propose to incorporate visual cues (e.g., human attention maps) to better ground the VQA models, showcasing impressive gains. However, we show that the performance improvements are not a result of improved visual grounding, but of a regularization effect that prevents over-fitting to linguistic priors. For instance, we find that it is not actually necessary to provide proper, human-based cues; random, insensible cues also result in similar improvements. Based on this observation, we propose a simpler regularization scheme that does not require any external annotations and yet achieves near state-of-the-art performance on VQA-CPv2. |
Database: | OpenAIRE |
External link: |
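
The description above refers to two ideas only at a high level: cue-based grounding regularizers that align a VQA model's attention with external cues (e.g., human attention maps), and the finding that random, insensible cues yield similar gains. The PyTorch sketch below is a minimal, hypothetical illustration of that cue-alignment mechanism; it is not the authors' code and not their proposed annotation-free regularizer. `ToyVQAHead`, `grounding_loss`, the KL-based alignment term, and the 0.1 loss weight are all assumptions made purely for illustration.

```python
# Illustrative sketch only (PyTorch); names and design choices are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyVQAHead(nn.Module):
    """Toy VQA head with soft attention over K image regions."""
    def __init__(self, img_dim=2048, q_dim=1024, hidden=512, num_answers=3129):
        super().__init__()
        self.att = nn.Linear(img_dim + q_dim, 1)          # per-region attention score
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_answers),
        )

    def forward(self, img_feats, q_feat):
        # img_feats: (B, K, img_dim), q_feat: (B, q_dim)
        K = img_feats.size(1)
        q_exp = q_feat.unsqueeze(1).expand(-1, K, -1)
        scores = self.att(torch.cat([img_feats, q_exp], dim=-1)).squeeze(-1)  # (B, K)
        alpha = F.softmax(scores, dim=-1)                                     # attention map
        pooled = (alpha.unsqueeze(-1) * img_feats).sum(dim=1)                 # (B, img_dim)
        logits = self.classifier(torch.cat([pooled, q_feat], dim=-1))
        return logits, alpha

def grounding_loss(alpha, cue, eps=1e-8):
    """KL(cue || alpha): push the model's attention toward an external cue map."""
    cue = cue / (cue.sum(dim=-1, keepdim=True) + eps)
    return F.kl_div((alpha + eps).log(), cue, reduction="batchmean")

# Toy training step with randomly generated tensors standing in for real features.
model = ToyVQAHead()
B, K = 8, 36
img_feats = torch.randn(B, K, 2048)
q_feat = torch.randn(B, 1024)
answers = torch.randint(0, 3129, (B,))
human_cue = torch.rand(B, K)   # stand-in for a human attention map
random_cue = torch.rand(B, K)  # swap in for human_cue to run the "insensible cue" control

logits, alpha = model(img_feats, q_feat)
loss = F.cross_entropy(logits, answers) + 0.1 * grounding_loss(alpha, human_cue)
loss.backward()
```

Swapping `human_cue` for `random_cue` in the combined loss is the kind of control the analysis describes: if accuracy gains on VQA-CPv2 persist with random cues, the benefit is better explained as a regularization effect than as improved visual grounding.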