Multimodal feature fusion by relational reasoning and attention for visual question answering

Author: Hua Hu, Zengchang Qin, Haiyang Hu, Jing Yu, Weifeng Zhang
Year of publication: 2020
Subject:
Source: Information Fusion. 55:116-126
ISSN: 1566-2535
DOI: 10.1016/j.inffus.2019.08.009
Description: Visual Question Answering (VQA) has recently become a hot topic in computer vision. A key challenge in VQA lies in how to fuse the multimodal features extracted from the image and the question. In this paper, we show that combining visual relationships and attention achieves more fine-grained feature fusion. Specifically, we design an effective and efficient module to reason about complex relationships between visual objects. In addition, a bilinear attention module is learned for question-guided attention over visual objects, which allows us to obtain more discriminative visual features. Given an image and a question in natural language, our VQA model learns the visual relational reasoning network and the attention network in parallel to fuse fine-grained textual and visual features, so that answers can be predicted accurately. Experimental results show that our approach achieves new single-model state-of-the-art performance on both the VQA 1.0 and VQA 2.0 datasets.
Database: OpenAIRE
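
The description above outlines the architecture only at a high level: a relational reasoning branch over pairs of visual objects and a question-guided bilinear attention branch, run in parallel and fused for answer prediction. The following is a minimal PyTorch sketch of that general idea, not the authors' implementation; all module names, dimensions, and the low-rank bilinear form are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalReasoning(nn.Module):
    """Reasons over all pairs of object features, conditioned on the question."""
    def __init__(self, obj_dim, q_dim, hidden_dim):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim + q_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU())

    def forward(self, objs, q):
        # objs: (B, N, obj_dim), q: (B, q_dim)
        B, N, D = objs.shape
        oi = objs.unsqueeze(2).expand(B, N, N, D)            # object i
        oj = objs.unsqueeze(1).expand(B, N, N, D)            # object j
        qe = q.unsqueeze(1).unsqueeze(1).expand(B, N, N, q.size(-1))
        pairs = torch.cat([oi, oj, qe], dim=-1)              # every object pair + question
        return self.g(pairs).sum(dim=(1, 2))                 # aggregate pairwise relations

class BilinearAttention(nn.Module):
    """Question-guided attention over object features via a low-rank bilinear map."""
    def __init__(self, obj_dim, q_dim, hidden_dim):
        super().__init__()
        self.proj_v = nn.Linear(obj_dim, hidden_dim)
        self.proj_q = nn.Linear(q_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, objs, q):
        # objs: (B, N, obj_dim), q: (B, q_dim)
        joint = torch.tanh(self.proj_v(objs) * self.proj_q(q).unsqueeze(1))
        alpha = F.softmax(self.score(joint).squeeze(-1), dim=1)   # (B, N) attention weights
        return (alpha.unsqueeze(-1) * objs).sum(dim=1)            # attended visual feature

class VQAFusionModel(nn.Module):
    """Runs the relational and attention branches in parallel, then fuses for answer prediction."""
    def __init__(self, obj_dim=2048, q_dim=1024, hidden_dim=512, num_answers=3000):
        super().__init__()
        self.relation = RelationalReasoning(obj_dim, q_dim, hidden_dim)
        self.attention = BilinearAttention(obj_dim, q_dim, hidden_dim)
        self.v_proj = nn.Linear(obj_dim, hidden_dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_answers))

    def forward(self, objs, q):
        rel = self.relation(objs, q)                  # relational reasoning branch
        att = self.v_proj(self.attention(objs, q))    # question-guided attention branch
        return self.classifier(torch.cat([rel, att], dim=-1))

# Usage with random tensors standing in for detector object features and a question embedding.
model = VQAFusionModel()
objs = torch.randn(2, 36, 2048)   # e.g. 36 detected object features per image
q = torch.randn(2, 1024)          # question embedding
logits = model(objs, q)           # (2, 3000) answer scores
```

The parallel design keeps the two branches independent until the final concatenation, so the attention branch can sharpen which objects matter while the relational branch models how they interact; the actual fusion and reasoning operators used in the paper may differ from this sketch.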