Multi-Task Learning with Multi-View Attention for Answer Selection and Knowledge Base Question Answering
Authors: | Yang Deng, Yuexiang Xie, Nan Du, Ying Shen, Yaliang Li, Kai Lei, Wei Fan, Min Yang |
Year: | 2019 |
Subject: |
FOS: Computer and information sciences
Computer Science - Computation and Language (cs.CL); Information retrieval; Question answering; Answer selection; Knowledge base question answering; Multi-task learning; Ranking; Knowledge base; Sentence representation |
Source: | AAAI |
ISSN: | 2374-3468, 2159-5399 |
DOI: | 10.1609/aaai.v33i01.33016318 |
Description: | Answer selection and knowledge base question answering (KBQA) are two important tasks in question answering (QA) systems. Existing methods solve these two tasks separately, which requires a large amount of repetitive work and neglects the rich correlation information between the tasks. In this paper, we tackle answer selection and KBQA simultaneously via multi-task learning (MTL), motivated by the following observations. First, both answer selection and KBQA can be regarded as ranking problems, one at the text level and the other at the knowledge level. Second, the two tasks can benefit each other: answer selection can incorporate external knowledge from a knowledge base (KB), while KBQA can be improved by learning contextual information from answer selection. To fulfill the goal of jointly learning these two tasks, we propose a novel multi-task learning scheme that utilizes multi-view attention learned from various perspectives, enabling the tasks to interact with each other and to learn more comprehensive sentence representations. Experiments conducted on several real-world datasets demonstrate the effectiveness of the proposed method, improving the performance of both answer selection and KBQA. The multi-view attention scheme also proves effective in assembling attentive information from different representational perspectives. Accepted by AAAI 2019 |
Database: | OpenAIRE |
External link: |
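The core idea in the description, one shared encoder feeding two ranking heads, with several attention "views" pooled into a single sentence representation, can be sketched as follows. This is a minimal NumPy illustration of the general MTL-with-multi-view-attention pattern, not the paper's actual architecture; all weights, dimensions, and the bilinear attention form are assumptions for the sake of the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 8  # hidden size (illustrative)

# Encoder weights shared by both ranking tasks (the MTL part)
W_shared = rng.normal(size=(d, d))

def encode(tokens):
    """Project token embeddings through the shared encoder."""
    return np.tanh(tokens @ W_shared)

def multi_view_attention(question, candidate, views):
    """Pool candidate token states with one attention distribution per
    view (e.g. word-level, knowledge-level), then average the
    view-specific summaries into one representation."""
    q = question.mean(axis=0)  # crude question summary vector
    summaries = []
    for W_view in views:
        scores = candidate @ W_view @ q      # bilinear alignment scores
        alpha = softmax(scores)              # attention weights
        summaries.append(alpha @ candidate)  # attentive pooling
    return np.mean(summaries, axis=0)

# Two hypothetical views; the paper uses several perspectives
views = [rng.normal(size=(d, d)) for _ in range(2)]

question = encode(rng.normal(size=(5, d)))   # 5 question tokens
candidate = encode(rng.normal(size=(7, d)))  # 7 answer/fact tokens

# Task-specific ranking heads on top of the shared representation
w_ansel = rng.normal(size=d)  # answer-selection head (text-level)
w_kbqa = rng.normal(size=d)   # KBQA head (knowledge-level)

rep = multi_view_attention(question, candidate, views)
score_answer_selection = float(rep @ w_ansel)
score_kbqa = float(rep @ w_kbqa)
```

In this sketch, the shared encoder and attention module let gradients from either task's ranking loss update the common parameters, which is what allows the two tasks to inform each other.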