Embodied Multimodal Multitask Learning
Authors: Lisa Lee, Devendra Singh Chaplot, Devi Parikh, Dhruv Batra, Ruslan Salakhutdinov
Language: English
Year of publication: 2019
Subjects: FOS: Computer and information sciences; Machine Learning (cs.LG); Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Robotics (cs.RO); Machine Learning (stat.ML); Multi-task learning; Modular design; Human–computer interaction; Embodied cognition; Visual objects; Question answering; Reinforcement learning; Knowledge transfer
Source: IJCAI
Description: Recent efforts on training visual navigation agents conditioned on language using deep reinforcement learning have been successful in learning policies for different multimodal tasks, such as semantic goal navigation and embodied question answering. In this paper, we propose a multitask model capable of jointly learning these multimodal tasks and transferring knowledge of words and their grounding in visual objects across the tasks. The proposed model uses a novel Dual-Attention unit to disentangle the knowledge of words in the textual representations and visual concepts in the visual representations, and to align them with each other. This disentangled, task-invariant alignment of representations facilitates grounding and knowledge transfer across both tasks. We show that the proposed model outperforms a range of baselines on both tasks in simulated 3D environments. We also show that this disentanglement of representations makes our model modular and interpretable, and allows for transfer to instructions containing new words by leveraging object detectors. See https://devendrachaplot.github.io/projects/EMML for demo videos. A rough code sketch of such text–visual alignment follows this record.
Database: OpenAIRE
External link:
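
The abstract describes aligning disentangled textual and visual representations so that words can pick out the visual concepts they refer to. As a rough, non-authoritative illustration (not the paper's Dual-Attention unit), the PyTorch sketch below gates a convolutional feature map channel-wise with a word-conditioned vector; the class name `TextVisualAlignment`, the bag-of-words text encoder, and all sizes are assumptions made for this example.

```python
# Minimal sketch of text-conditioned visual gating (illustrative only;
# the paper's actual Dual-Attention unit differs and is described in the paper).
import torch
import torch.nn as nn

class TextVisualAlignment(nn.Module):
    def __init__(self, vocab_size: int, num_channels: int, embed_dim: int = 64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)   # bag-of-words text encoder (assumed)
        self.to_gate = nn.Linear(embed_dim, num_channels)     # maps text to per-channel gates

    def forward(self, token_ids: torch.Tensor, visual_feats: torch.Tensor) -> torch.Tensor:
        # token_ids:    (batch, num_tokens)     instruction / question tokens
        # visual_feats: (batch, channels, H, W) convolutional feature map
        text_vec = self.embed(token_ids)                       # (batch, embed_dim)
        gates = torch.sigmoid(self.to_gate(text_vec))          # (batch, channels), values in [0, 1]
        # Gate each visual channel by its relevance to the words: channels that
        # respond to mentioned objects or attributes are kept, others suppressed.
        return visual_feats * gates.unsqueeze(-1).unsqueeze(-1)  # (batch, channels, H, W)

# Usage sketch with made-up sizes:
model = TextVisualAlignment(vocab_size=100, num_channels=32)
tokens = torch.randint(0, 100, (2, 5))   # two instructions, five tokens each
feats = torch.randn(2, 32, 7, 7)         # visual feature maps from a CNN
aligned = model(tokens, feats)           # text-conditioned visual features
print(aligned.shape)                     # torch.Size([2, 32, 7, 7])
```

Channel-wise gating is only one simple way for language to select task-relevant visual features; for the disentanglement and alignment actually proposed, consult the paper and the project page linked above.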