Leveraging Monolingual Data with Self-Supervision for Multilingual Neural Machine Translation
Author: Ankur Bapna, Yuan Cao, Aditya Siddhant, Mia Xu Chen, Sneha Kudugunta, Orhan Firat, Naveen Arivazhagan, Yonghui Wu
Year of publication: 2020
Subject: FOS: Computer and information sciences; Computation and Language (cs.CL); Machine Learning (cs.LG); Machine translation; Self-supervision; Natural language processing; Artificial intelligence; BLEU
Source: ACL
Description: Over the last few years, two promising research directions in low-resource neural machine translation (NMT) have emerged. The first focuses on utilizing high-resource languages to improve the translation quality of low-resource languages via multilingual NMT. The second employs monolingual data with self-supervision to pre-train translation models, followed by fine-tuning on small amounts of supervised data. In this work, we join these two lines of research and demonstrate the efficacy of monolingual data with self-supervision in multilingual NMT. We offer three major results: (i) Using monolingual data significantly boosts the translation quality of low-resource languages in multilingual models. (ii) Self-supervision improves zero-shot translation quality in multilingual models. (iii) Leveraging monolingual data with self-supervision provides a viable path towards adding new languages to multilingual models, reaching up to 33 BLEU on ro-en translation without any parallel data or back-translation.
Database: OpenAIRE