Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded
Author: | Dhruv Batra, Stefan Lee, Hongxia Jin, Ramprasaath R. Selvaraju, Yilin Shen, Devi Parikh, Larry Heck, Shalini Ghosh |
---|---|
Year of publication: | 2019 |
Subject: |
FOS: Computer and information sciences
Computer Science - Computer Vision and Pattern Recognition (cs.CV), Closed captioning, Question answering, Language model, Artificial intelligence, Visualization, Human–computer interaction, Prior probability, Task analysis |
Source: | ICCV |
Description: | Many vision and language models suffer from poor visual grounding, often falling back on easy-to-learn language priors rather than basing their decisions on visual concepts in the image. In this work, we propose a generic approach called Human Importance-aware Network Tuning (HINT) that effectively leverages human demonstrations to improve visual grounding. HINT encourages deep networks to be sensitive to the same input regions as humans. Our approach optimizes the alignment between human attention maps and gradient-based network importances, ensuring that models learn not just to look at, but to rely on, the visual concepts that humans found relevant for a task when making predictions. We apply HINT to Visual Question Answering and Image Captioning tasks, outperforming top approaches on splits that penalize over-reliance on language priors (VQA-CP and robust captioning) using human attention demonstrations for just 6% of the training data. Comment: Published at ICCV'2019 |
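The description says HINT "optimizes the alignment between human attention maps and gradient-based network importances". One common way to align two importance maps is a pairwise ranking penalty: whenever humans rank region i above region j, the network's importance scores should preserve that ordering. The sketch below is a minimal, self-contained illustration of that idea in plain NumPy; the function name and hinge formulation are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def ranking_alignment_loss(human_attention, network_importance):
    """Hedged sketch of a HINT-style alignment loss (names hypothetical).

    For every pair of regions (i, j) that humans rank i above j,
    add a hinge penalty if the network's gradient-based importance
    ranks them the other way.
    """
    h = np.asarray(human_attention, dtype=float).ravel()
    g = np.asarray(network_importance, dtype=float).ravel()
    loss = 0.0
    for i in range(len(h)):
        for j in range(len(h)):
            if h[i] > h[j]:  # humans consider region i more important than j
                # hinge: penalize only when the network inverts that ordering
                loss += max(0.0, g[j] - g[i])
    return loss

# Orderings agree -> zero penalty; an inverted pair -> positive penalty.
print(ranking_alignment_loss([0.9, 0.5, 0.1], [0.8, 0.4, 0.2]))  # 0.0
print(ranking_alignment_loss([0.9, 0.1], [0.1, 0.9]) > 0)        # True
```

In a real training loop, `network_importance` would come from a gradient-based attribution (e.g. Grad-CAM-style scores over image regions) and this penalty would be added to the task loss; here it is kept framework-free for clarity.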
Database: | OpenAIRE |
External link: |