AMBERT: A Pre-trained Language Model with Multi-Grained Tokenization
Author: | Xinsong Zhang, Hang Li, Pengshuai Li |
---|---|
Year of publication: | 2021 |
Subject: |
FOS: Computer and information sciences
Computer Science - Machine Learning (cs.LG); Computer Science - Computation and Language (cs.CL); lexical analysis; tokenization; natural language understanding; language models; inference; encoders; benchmarks; artificial intelligence; natural language processing |
Source: | ACL/IJCNLP (Findings) |
DOI: | 10.18653/v1/2021.findings-acl.37 |
Description: | Pre-trained language models such as BERT have exhibited remarkable performance on many tasks in natural language understanding (NLU). The tokens in these models are usually fine-grained, in the sense that for languages like English they are words or sub-words, and for languages like Chinese they are characters. In English, for example, there are multi-word expressions that form natural lexical units, and thus the use of coarse-grained tokenization also appears reasonable. In fact, both fine-grained and coarse-grained tokenizations have advantages and disadvantages for the learning of pre-trained language models. In this paper, we propose a novel pre-trained language model, referred to as AMBERT (A Multi-grained BERT), built on both fine-grained and coarse-grained tokenizations. For English, AMBERT takes both the sequence of words (fine-grained tokens) and the sequence of phrases (coarse-grained tokens) as input after tokenization, employs one encoder to process the sequence of words and another encoder to process the sequence of phrases, shares parameters between the two encoders, and finally produces a sequence of contextualized representations of the words and a sequence of contextualized representations of the phrases. Experiments have been conducted on benchmark datasets for Chinese and English, including CLUE, GLUE, SQuAD and RACE. The results show that AMBERT outperforms BERT in all cases, and the improvements are particularly significant for Chinese. We also develop a method to improve the efficiency of AMBERT in inference, which still performs better than BERT at the same computational cost. Comment: To appear in Findings of ACL 2021. In this version, we develop a simplified method to improve the efficiency of AMBERT in inference, which still performs better than BERT at the same computational cost |
Database: | OpenAIRE |
External link: |
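The description above outlines AMBERT's core idea: encode a fine-grained (word-level) view and a coarse-grained (phrase-level) view of the same sentence with two encoders that share parameters. The following is a minimal NumPy sketch of that dual-view encoding under stated assumptions, not the paper's implementation: the single shared projection matrix `W_shared` and the one-shot softmax self-attention stand in for a full shared Transformer encoder stack, and the example phrase segmentation is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16

# Hypothetical shared parameters: one projection matrix applied in both
# encoding passes, mirroring AMBERT's parameter sharing between encoders.
W_shared = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)

def embed(tokens, vocab, table):
    # Look up one embedding row per token.
    return table[[vocab[t] for t in tokens]]

def encode(x, W):
    # Illustrative stand-in for a Transformer encoder layer:
    # scaled dot-product self-attention followed by a shared linear map.
    scores = x @ x.T / np.sqrt(x.shape[1])
    scores = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = scores / scores.sum(axis=1, keepdims=True)
    return attn @ (x @ W)

# Fine-grained (word) and coarse-grained (phrase) views of the same sentence.
fine = ["new", "york", "is", "big"]
coarse = ["new york", "is", "big"]

# A toy joint vocabulary and embedding table covering both granularities.
vocab = {t: i for i, t in enumerate(sorted(set(fine) | set(coarse)))}
table = rng.normal(size=(len(vocab), d_model))

# Two sequences of contextualized representations, one per granularity,
# produced with the same shared parameters.
h_fine = encode(embed(fine, vocab, table), W_shared)      # shape (4, d_model)
h_coarse = encode(embed(coarse, vocab, table), W_shared)  # shape (3, d_model)
```

A downstream classifier could then consume both representation sequences, e.g. by concatenating their leading (or pooled) vectors, which is where the paper's reported gains over single-granularity BERT come from.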