Testing pre-trained Transformer models for Lithuanian news clustering
Author: | Stankevičius, L.; Lukoševičius, Mantas |
---|---|
Language: | English |
Publication year: | 2020 |
Subject: |
FOS: Computer and information sciences; Machine Learning (cs.LG); Computation and Language (cs.CL); Information Retrieval (cs.IR); document embedding; document clustering; Lithuanian news articles; Transformer model; multilingual; BERT; XLM-R; ACM I.2.6; MSC 68T05 |
Source: | Scopus-Elsevier |
Description: | The recent introduction of the Transformer deep learning architecture brought breakthroughs in various natural language processing tasks. However, non-English languages could not immediately leverage these advances, as the available models were pre-trained on English text. This changed with research focusing on multilingual models, of which less-spoken languages are the main beneficiaries. We compare pre-trained multilingual BERT, XLM-R, and older learned text representation methods as encodings for the task of Lithuanian news clustering. Our results indicate that publicly available pre-trained multilingual Transformer models can be fine-tuned to surpass word vectors, but they still score much lower than specially trained doc2vec embeddings (see the illustrative sketches following this record). Submission accepted at https://ivus.ktu.edu/ |
Database: | OpenAIRE |
External link: |
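
The abstract describes an embed-and-cluster approach: a pre-trained multilingual Transformer encodes each news article into a fixed-size vector, and the vectors are then clustered. Below is a minimal sketch of such a pipeline, not the authors' exact method: the model names (`bert-base-multilingual-cased`, `xlm-roberta-base`), the mean-pooling step, and the choice of k-means are assumptions made for illustration.

```python
# A hedged sketch: embed Lithuanian documents with a pre-trained
# multilingual Transformer, then cluster the embeddings with k-means.
# Pooling strategy and cluster count are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.cluster import KMeans

MODEL_NAME = "bert-base-multilingual-cased"  # or "xlm-roberta-base" for XLM-R
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(texts):
    """Mean-pool the last hidden states into one vector per document."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=512, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    mask = batch["attention_mask"].unsqueeze(-1)        # (B, T, 1)
    summed = (out.last_hidden_state * mask).sum(dim=1)  # (B, H)
    counts = mask.sum(dim=1).clamp(min=1)               # (B, 1)
    return (summed / counts).numpy()

docs = ["Pirmoji naujiena ...", "Antroji naujiena ...", "Trečioji naujiena ..."]
vectors = embed(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)
```

The abstract notes the Transformers were additionally fine-tuned before surpassing word vectors; the sketch above uses the off-the-shelf encoder only.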
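The abstract also reports that specially trained doc2vec embeddings scored highest. A hedged sketch of such a baseline with gensim follows; all hyperparameters here are illustrative, not the paper's.

```python
# A hedged sketch of a doc2vec baseline: train document vectors on the
# corpus itself, then cluster them. Hyperparameters are assumptions.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans

docs = ["pirmoji naujiena apie sportą", "antroji naujiena apie politiką"]
tagged = [TaggedDocument(words=d.split(), tags=[i]) for i, d in enumerate(docs)]

model = Doc2Vec(tagged, vector_size=100, window=5, min_count=1,
                epochs=40, workers=4)
vectors = [model.dv[i] for i in range(len(docs))]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)
```

Unlike the pre-trained Transformer, doc2vec is trained directly on the target corpus, which is one plausible reason the abstract reports it scoring higher on this task.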