Unsupervised Question Answering by Cloze Translation
Author: | Sebastian Riedel, Ludovic Denoyer, Patrick S. H. Lewis |
---|---|
Language: | English |
Year of publication: | 2019 |
Subject: |
FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Artificial Intelligence (cs.AI); Computer Science - Computation and Language (cs.CL); Question answering; Natural language processing; Machine learning; Artificial intelligence; Named entity; Noun phrase; Cloze translation; Training set |
Source: | ACL (1) |
Description: | Obtaining training data for Question Answering (QA) is time-consuming and resource-intensive, and existing QA datasets are available only for limited domains and languages. In this work, we explore to what extent high-quality training data is actually required for Extractive QA, and investigate the possibility of unsupervised Extractive QA. We approach this problem by first learning to generate context, question, and answer triples in an unsupervised manner, which we then use to synthesize Extractive QA training data automatically. To generate such triples, we first sample random context paragraphs from a large corpus of documents, and then sample random noun phrases or named-entity mentions from these paragraphs as answers. Next, we convert the answers in context to "fill-in-the-blank" cloze questions, and finally translate them into natural questions. We propose and compare various unsupervised ways to perform cloze-to-natural question translation, including training an unsupervised NMT model on non-aligned corpora of natural questions and cloze questions, as well as a rule-based approach. We find that modern QA models can learn to answer human questions surprisingly well using only synthetic training data. We demonstrate that, without using the SQuAD training data at all, our approach achieves 56.4 F1 on SQuAD v1 (64.5 F1 when the answer is a named-entity mention), outperforming early supervised models. To appear in ACL 2019. |
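The cloze-generation step described in the abstract (masking a sampled answer span in its context to form a "fill-in-the-blank" question) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `MASK` token, the sentence-splitting regex, and the `make_cloze` helper are assumptions made for the example.

```python
import re

# Hypothetical mask token standing in for the removed answer span.
MASK = "[MASK]"

def make_cloze(paragraph: str, answer: str) -> str:
    """Turn a (context, answer) pair into a fill-in-the-blank cloze question
    by masking the first occurrence of the answer span.

    Only the sentence containing the answer is kept, so the result resembles
    a short cloze question rather than a whole masked paragraph.
    """
    for sentence in re.split(r"(?<=[.!?])\s+", paragraph):
        if answer in sentence:
            return sentence.replace(answer, MASK, 1)
    raise ValueError("answer span not found in paragraph")

context = ("Alan Turing was born in London in 1912. "
           "He proposed the Turing test in 1950.")
print(make_cloze(context, "London"))
# → Alan Turing was born in [MASK] in 1912.
```

Under the paper's pipeline, such cloze questions would then be rewritten into natural questions (e.g. "Where was Alan Turing born?") by the unsupervised NMT or rule-based translation step; that step is not shown here.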
Database: | OpenAIRE |
External link: |