Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation
Author: | Benjamin Heinzerling, Michael Strube |
Year of publication: | 2019 |
Subject: | FOS: Computer and information sciences; Computer Science - Computation and Language (cs.CL); Natural language processing; Named-entity recognition; Artificial intelligence; Sequence tagging; Character (computing); Representation (mathematics) |
Source: | ACL (1) |
DOI: | 10.48550/arxiv.1906.01569 |
Description: | Pretrained contextual and non-contextual subword embeddings have become available in over 250 languages, allowing massively multilingual NLP. However, while there is no dearth of pretrained embeddings, the distinct lack of systematic evaluations makes it difficult for practitioners to choose between them. In this work, we conduct an extensive evaluation comparing non-contextual subword embeddings, namely FastText and BPEmb, and a contextual representation method, namely BERT, on multilingual named entity recognition and part-of-speech tagging. We find that overall, a combination of BERT, BPEmb, and character representations works best across languages and tasks. A more detailed analysis reveals different strengths and weaknesses: Multilingual BERT performs well in medium- to high-resource languages, but is outperformed by non-contextual subword embeddings in a low-resource setting. Comment: ACL 2019 |
Database: | OpenAIRE |
External link: |
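
The description above compares non-contextual BPEmb subword embeddings with multilingual BERT. As a minimal sketch of how the BPEmb side of that comparison can be loaded in practice, the snippet below uses the `bpemb` Python package (maintained by the paper's first author); the language, vocabulary size, and dimension chosen here are illustrative assumptions, not necessarily the paper's exact experimental settings.

```python
# A minimal sketch, not the paper's code: loading pretrained non-contextual
# BPE subword embeddings with the bpemb package (pip install bpemb) and
# embedding a sentence. Vocabulary size (vs) and dimension (dim) are
# illustrative choices.
from bpemb import BPEmb

# English subword embeddings: 10,000 BPE merge operations, 100-dim vectors.
# Pretrained models are available for over 250 languages, e.g. lang="de".
bpemb_en = BPEmb(lang="en", vs=10000, dim=100)

sentence = "Sequence tagging in many languages"
subwords = bpemb_en.encode(sentence)  # BPE subword tokens
vectors = bpemb_en.embed(sentence)    # one 100-dim vector per subword

print(subwords)       # e.g. ['▁sequence', '▁tag', 'ging', ...]
print(vectors.shape)  # (number_of_subwords, 100)
```

Per the abstract, the best-performing setup combines these subword vectors with BERT and character representations before the tagging layer; the snippet covers only the BPEmb component.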