When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models
Authors: Antonios Anastasopoulos, Djamé Seddah, Benjamin Muller, Benoît Sagot
Contributors: Automatic Language Modelling and ANAlysis & Computational Humanities (ALMAnaCH), Inria de Paris, Institut National de Recherche en Informatique et en Automatique (Inria); George Mason University [Fairfax]; ANR-16-CE33-0021 PARSITI "Analyser l'impossible, Traduire l'improbable" (2016); ANR-15-CE38-0011 SoSweet "Une sociolinguistique de Twitter : liens sociaux et variations linguistiques" (2015); ANR-19-P3IA-0001 PRAIRIE "PaRis Artificial Intelligence Research InstitutE" (2019); Sorbonne Université (SU); Estève, Yannick; Jiménez, Tania; Parcollet, Titouan; Zanon Boito, Marcely
Language: English
Year of publication: 2020
Subject: FOS: Computer and information sciences; Computer Science - Computation and Language (cs.CL); [INFO.INFO-CL] Computer Science [cs] / Computation and Language [cs.CL]; natural language processing; language models; multilingual neural language models; low-resource languages; transfer learning; transliteration; raw data
Source: NAACL-HLT 2021, Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, June 2021, Mexico City, Mexico; Actes de la 29e Conférence sur le Traitement Automatique des Langues Naturelles, Volume 1 : conférence principale, TALN 2022, June 2022, Avignon, France, pp. 450-451
Description: Transfer learning based on pretraining language models on large amounts of raw data has become the new norm for reaching state-of-the-art performance in NLP. Still, it remains unclear how this approach should be applied to unseen languages, i.e. languages not covered by any available large-scale multilingual language model and for which only a small amount of raw data is generally available. In this work, by comparing multilingual and monolingual models, we show that such models behave in multiple ways on unseen languages. Some languages greatly benefit from transfer learning and behave similarly to closely related high-resource languages, whereas others apparently do not. Focusing on the latter, we show that this failure to transfer is largely related to the script in which such languages are written. Transliterating those languages significantly improves the performance of large-scale multilingual language models on downstream tasks; a minimal sketch of this idea is given after the record below. Accepted at NAACL-HLT 2021.
Database: OpenAIRE
External link:
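As a concrete illustration of the transliteration idea described in the abstract, here is a minimal sketch, not the authors' actual pipeline: it romanizes text with the `unidecode` package as a stand-in for the purpose-built transliteration schemes used in the paper, then encodes it with mBERT (`bert-base-multilingual-cased`) via the Hugging Face `transformers` library. The example sentence and the choice of romanizer are assumptions made for illustration only.

```python
# Minimal sketch (an assumption, not the paper's exact pipeline):
# transliterate text from an unseen language to Latin script before
# encoding it with mBERT, so the model can reuse subwords learned from
# related, well-resourced Latin-script languages (e.g. Turkish for Uyghur).
from transformers import AutoModel, AutoTokenizer
from unidecode import unidecode  # naive ASCII romanizer, a stand-in for
                                 # the purpose-built schemes used in the paper

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

# Hypothetical Uyghur greeting in Perso-Arabic script (illustrative only).
text = "ياخشىمۇسىز"
latin = unidecode(text)  # romanize to Latin script

print(tokenizer.tokenize(text))   # subwords drawn from Arabic-script data (or [UNK])
print(tokenizer.tokenize(latin))  # subwords shared with Latin-script languages

# Encode the transliterated input; the contextual embeddings can then feed
# a downstream task head (tagging, parsing, NER, ...).
outputs = model(**tokenizer(latin, return_tensors="pt"))
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 768])
```

The design point the sketch captures is the paper's central observation: for the hard unseen languages, what blocks transfer is largely the script, so mapping the input into the script of related high-resource languages lets the pretrained model's subword vocabulary and representations apply.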