Vision-Dialog Navigation by Exploring Cross-Modal Memory
Author: | Bingqian Lin, Fengda Zhu, Zhaohuan Zhan, Yi Zhu, Xiaojun Chang, Jianbin Jiao, Xiaodan Liang |
Year of publication: | 2020 |
Subject: | Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); visual memory; human–computer interaction; dialog; conversation; collaborative learning; visual language; memory module; task analysis; artificial intelligence; natural language |
Source: | CVPR |
DOI: | 10.1109/cvpr42600.2020.01074 |
Description: | Vision-dialog navigation, posed as a new holy-grail task in the vision-and-language field, aims to learn an agent capable of constantly conversing for help in natural language and navigating according to human responses. Beyond the common challenges of vision-language navigation, vision-dialog navigation must also capture the language intentions of a series of questions in the temporal context of the dialog history and co-reason over both dialogs and visual scenes. In this paper, we propose the Cross-modal Memory Network (CMN) for remembering and understanding the rich information relevant to historical navigation actions. Our CMN consists of two memory modules: the language memory module (L-mem) and the visual memory module (V-mem). Specifically, the L-mem learns latent relationships between the current language interaction and the dialog history via a multi-head attention mechanism. The V-mem learns to associate the current visual views with the cross-modal memory of previous navigation actions, where the cross-modal memory is generated via vision-to-language attention and language-to-vision attention. Benefiting from the collaborative learning of the L-mem and the V-mem, our CMN can exploit the memory of past navigation decisions that is relevant to the current step. Experiments on the CVDN dataset show that our CMN outperforms the previous state-of-the-art model by a significant margin in both seen and unseen environments. Comment: CVPR 2020 |
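The abstract describes the two memory modules only at a high level, so the following is a minimal PyTorch sketch of that idea, not the authors' released code: the class names (`LMem`, `VMem`, `CMNSketch`), tensor shapes, and the final linear action scorer are all illustrative assumptions; only the attention pattern (multi-head attention over dialog history, then vision-to-language and language-to-vision attention) is taken from the abstract.

```python
# Hedged sketch of the Cross-modal Memory Network (CMN) idea.
# Module/parameter names and shapes are assumptions for illustration.
import torch
import torch.nn as nn


class LMem(nn.Module):
    """Language memory: relate the current exchange to dialog history
    via multi-head attention (query = current utterance embedding,
    keys/values = embeddings of past dialog exchanges)."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, cur_utt, history):
        # cur_utt: (B, 1, dim); history: (B, T, dim)
        out, _ = self.attn(cur_utt, history, history)
        return out  # (B, 1, dim) history-aware language memory


class VMem(nn.Module):
    """Visual memory: associate current views with a cross-modal memory
    built from vision-to-language and language-to-vision attention."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.v2l = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.l2v = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, views, lang_mem, prev_views):
        # views: (B, K, dim) current view features
        # lang_mem: (B, 1, dim) output of LMem
        # prev_views: (B, T, dim) views from past navigation steps
        # Vision-to-language attention: ground language in past views.
        grounded, _ = self.v2l(lang_mem, prev_views, prev_views)
        # Language-to-vision attention: condition current views on it.
        fused, _ = self.l2v(views, grounded, grounded)
        return fused  # (B, K, dim) memory-aware view features


class CMNSketch(nn.Module):
    """Scores candidate navigation directions from memory-aware views."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.l_mem = LMem(dim)
        self.v_mem = VMem(dim)
        self.score = nn.Linear(dim, 1)  # assumed action scorer

    def forward(self, cur_utt, history, views, prev_views):
        lang = self.l_mem(cur_utt, history)
        fused = self.v_mem(views, lang, prev_views)
        return self.score(fused).squeeze(-1)  # (B, K) action logits


if __name__ == "__main__":
    B, T, K, D = 2, 5, 36, 512  # batch, history length, views, feature dim
    model = CMNSketch(D)
    logits = model(torch.randn(B, 1, D), torch.randn(B, T, D),
                   torch.randn(B, K, D), torch.randn(B, T, D))
    print(logits.shape)  # torch.Size([2, 36])
```

The sketch keeps the collaborative structure from the abstract, with the L-mem output feeding the V-mem so action scores depend on both dialog history and past visual observations; details such as feature extraction and decoding follow the paper, not this illustration.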
Database: | OpenAIRE |
External link: |