Larger-Scale Transformers for Multilingual Masked Language Modeling
| Author: | Giri Anantharaman, Naman Goyal, Myle Ott, Alexis Conneau, Jingfei Du |
| --- | --- |
| Year of publication: | 2021 |
| Subject: | FOS: Computer and information sciences; Computation and Language (cs.CL); Natural language processing; Language understanding; Language model; Artificial intelligence; Computer science; Benchmark (computing); Transformer (machine learning model) |
| Source: | Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021) |
| Description: | Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models, dubbed XLM-R XL and XLM-R XXL, outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests that pretrained models with larger capacity can achieve strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available. |
| Database: | OpenAIRE |
| External link: | |
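
The description above notes that the code and models were released publicly. Below is a minimal sketch of multilingual masked-token prediction with one of the released checkpoints, assuming they are distributed on the Hugging Face Hub under the names `facebook/xlm-roberta-xl` and `facebook/xlm-roberta-xxl`; those model ids, the half-precision loading choice, and the example sentence are assumptions for illustration, not details taken from this record.

```python
# Minimal sketch: multilingual masked-token prediction with a released XLM-R XL checkpoint.
# The Hub model ids and fp16 setting are assumptions; the record only states that
# code and models are publicly available.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "facebook/xlm-roberta-xl"  # assumed id for the 3.5B model; "facebook/xlm-roberta-xxl" for 10.7B

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME, torch_dtype=dtype).to(device)
model.eval()

# Masked-LM inference on a non-English sentence (French): predict the masked token.
text = f"Paris est la capitale de la {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt").to(device)

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask position and report the top-5 predicted tokens.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```

In half precision the 3.5B-parameter XL checkpoint fits on a single modern GPU, while the 10.7B-parameter XXL checkpoint generally requires multiple GPUs or CPU/disk offloading.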