On the Transformer Growth for Progressive BERT Training
Author: | Chen Chen, Liyuan Liu, Jiawei Han, Jing Li, Xiaotao Gu, Hongkun Yu |
Year: | 2020 |
Subject: | FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Computation and Language (cs.CL); Network architecture; Mathematical optimization; Computational complexity theory; Operator (computer programming); Dimension (vector space); Scaling; Selection (genetic algorithm); Transformer (machine learning model) |
Source: | NAACL-HLT |
DOI: | 10.48550/arxiv.2010.12562 |
Description: | Due to the excessive cost of large-scale language model pre-training, considerable efforts have been made to train BERT progressively -- starting from an inferior but low-cost model and gradually growing it to increase the computational complexity. Our objective is to advance the understanding of Transformer growth and to discover the principles that guide progressive training. First, we find that, similar to network architecture search, Transformer growth also favors compound scaling. Specifically, while existing methods only conduct network growth in a single dimension, we observe that it is beneficial to use compound growth operators and balance multiple dimensions (e.g., the depth, width, and input length of the model). Moreover, we explore alternative growth operators in each dimension via controlled comparison to give practical guidance on operator selection. In light of our analyses, the proposed method speeds up BERT pre-training by 73.6% and 82.2% for the base and large models respectively, while achieving comparable performance. Comment: NAACL 2021. An illustrative sketch of compound growth appears after this record. |
Database: | OpenAIRE |
External link: |
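
The description contrasts single-dimension growth with compound growth that balances depth, width, and input length. The sketch below is only a minimal illustration of that idea, not the paper's method: the stage count, the linear interpolation, and the concrete start/target configurations are assumptions chosen for the example.

```python
# A minimal, hypothetical sketch of "compound" progressive growth:
# instead of enlarging a single dimension, every stage grows depth,
# width, and input length together. The stage count, the linear
# interpolation, and the start/target configurations below are
# illustrative assumptions, not the schedule or growth operators
# used in the paper.
from dataclasses import dataclass
from typing import List


@dataclass
class GrowthConfig:
    depth: int    # number of Transformer layers
    width: int    # hidden size
    seq_len: int  # input sequence length


def compound_schedule(start: GrowthConfig, target: GrowthConfig,
                      num_stages: int) -> List[GrowthConfig]:
    """Grow all dimensions at once, linearly, over num_stages stages."""
    stages = []
    for s in range(1, num_stages + 1):
        frac = s / num_stages
        stages.append(GrowthConfig(
            depth=round(start.depth + frac * (target.depth - start.depth)),
            width=round(start.width + frac * (target.width - start.width)),
            seq_len=round(start.seq_len + frac * (target.seq_len - start.seq_len)),
        ))
    return stages


if __name__ == "__main__":
    small = GrowthConfig(depth=3, width=256, seq_len=128)       # assumed low-cost starting model
    bert_base = GrowthConfig(depth=12, width=768, seq_len=512)  # BERT-base-like target
    for i, cfg in enumerate(compound_schedule(small, bert_base, num_stages=4), start=1):
        print(f"stage {i}: {cfg}")
```

A full progressive-training pipeline would additionally need growth operators that map the smaller model's parameters onto the enlarged one in each dimension, which is the part of the design the paper compares via controlled experiments; the schedule above only shows how the dimensions could be grown in balance rather than one at a time.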