Improving Multilingual Models with Language-Clustered Vocabularies
Author: | Hyung Won Chung, Dan Garrette, Kiat Chuan Tan, Jason Riesa |
Year of publication: | 2020 |
Subject: | FOS: Computer and information sciences; Computer Science - Computation and Language (cs.CL); Computer Science - Machine Learning (cs.LG); Natural language processing; Artificial intelligence |
Source: | EMNLP (1) |
DOI: | 10.48550/arxiv.2010.12777 |
Description: | State-of-the-art multilingual models depend on vocabularies that cover all of the languages the model will expect to see at inference time, but the standard methods for generating those vocabularies are not ideal for massively multilingual applications. In this work, we introduce a novel procedure for multilingual vocabulary generation that combines the separately trained vocabularies of several automatically derived language clusters, thus balancing the trade-off between cross-lingual subword sharing and language-specific vocabularies. Our experiments show improvements across languages on key multilingual benchmark tasks: TyDi QA (+2.9 F1), XNLI (+2.1%), and WikiAnn NER (+2.8 F1), along with a factor-of-8 reduction in the out-of-vocabulary rate, all without increasing the size of the model or the data. Comment: Published in the main conference of EMNLP 2020 |
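The core idea in the abstract — group languages into clusters by similarity, train a vocabulary per cluster, and take the union — can be sketched in miniature. This is a toy illustration, not the paper's actual pipeline: it represents each language by a unigram character distribution, clusters greedily by cosine similarity, and stands in "most frequent characters" for real subword-vocabulary training. All corpora, thresholds, and helper names (`distribution`, `cluster_languages`, `cluster_vocab`) are illustrative assumptions.

```python
from collections import Counter
from math import sqrt

def distribution(corpus):
    """Unigram character distribution of a toy corpus (list of strings)."""
    counts = Counter("".join(corpus))
    total = sum(counts.values())
    return {ch: c / total for ch, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse distributions."""
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in set(p) | set(q))
    norm_p = sqrt(sum(v * v for v in p.values()))
    norm_q = sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

def cluster_languages(dists, threshold=0.5):
    """Greedy one-pass clustering: a language joins the first cluster
    whose representative is similar enough, else starts a new cluster."""
    clusters = []  # list of (representative_dist, [lang, ...])
    for lang, d in dists.items():
        for rep, members in clusters:
            if cosine(rep, d) >= threshold:
                members.append(lang)
                break
        else:
            clusters.append((d, [lang]))
    return [members for _, members in clusters]

def cluster_vocab(corpora, langs, size=20):
    """Stand-in for training a subword vocabulary on one cluster:
    here simply the most frequent characters across its corpora."""
    counts = Counter()
    for lang in langs:
        counts.update("".join(corpora[lang]))
    return {ch for ch, _ in counts.most_common(size)}

# Toy corpora: two Latin-script languages and one Cyrillic-script language.
corpora = {
    "en": ["the cat sat", "a dog ran"],
    "de": ["die katze sass", "ein hund lief"],
    "ru": ["кошка сидела", "собака бежала"],
}
dists = {lang: distribution(text) for lang, text in corpora.items()}
clusters = cluster_languages(dists)
# Final vocabulary = union of the per-cluster vocabularies.
final_vocab = set().union(*(cluster_vocab(corpora, c) for c in clusters))
```

With these toy corpora the Latin-script languages land in one cluster and the Cyrillic one in another, so the union vocabulary covers both scripts — which is the sense in which the clustered approach trades off cross-lingual sharing against language-specific coverage.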
Database: | OpenAIRE |
External link: |