Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture

Authors: Liu, Peiyu; Gao, Ze-Feng; Chen, Yushuo; Zhao, Wayne Xin; Wen, Ji-Rong
Publication year: 2023
Subject:
Document type: Working Paper
Description: In this paper, we propose a highly parameter-efficient approach for scaling pre-trained language models (PLMs) to greater depth. Unlike prior work that shares all parameters or adds extra blocks, we design a more capable parameter-sharing architecture based on the matrix product operator (MPO). An MPO decomposition reorganizes and factorizes a parameter matrix into two parts: a major part that carries most of the information (the central tensor) and a supplementary part that holds only a small proportion of the parameters (the auxiliary tensors). Building on this decomposition, our architecture shares the central tensor across all layers to reduce the model size, while keeping layer-specific auxiliary tensors (together with adapters) to enhance adaptation flexibility. To improve model training, we further propose a stable initialization algorithm tailored to the MPO-based architecture. Extensive experiments demonstrate the effectiveness of the proposed model in reducing the model size while achieving highly competitive performance. (A rough, illustrative sketch of such an MPO-style factorization is given after this record.)
Comment: 14 pages, 4 figures, 6 tables
Database: arXiv
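
The record contains no code, so the sketch below is only a minimal, generic illustration of the kind of MPO (tensor-train style) factorization the abstract refers to: a weight matrix is reshaped and split by sequential SVDs into a chain of local tensors, where the middle tensor corresponds to the "central tensor" and the outer ones to the "auxiliary tensors". The function names (`mpo_decompose`, `mpo_to_matrix`), the NumPy-based SVD routine, the `truncate` parameter, and the example dimension factorization (4, 12, 16) x (16, 12, 4) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np


def mpo_decompose(weight, in_dims, out_dims, truncate=None):
    """Factorize a weight matrix into a chain of local tensors via
    sequential SVDs (a tensor-train / MPO style decomposition).

    in_dims and out_dims factorize the matrix dimensions, e.g. a
    768x768 matrix with in_dims=(4, 12, 16) and out_dims=(16, 12, 4)
    yields three local tensors of shape (r_{k-1}, i_k, j_k, r_k).
    With an odd number of factors, the middle tensor plays the role
    of a central tensor and the outer ones are auxiliary tensors.
    """
    n = len(in_dims)
    assert int(np.prod(in_dims)) * int(np.prod(out_dims)) == weight.size

    # Reshape to (i_1, ..., i_n, j_1, ..., j_n), then interleave the
    # modes so the axis order is (i_1, j_1, i_2, j_2, ..., i_n, j_n).
    tensor = weight.reshape(tuple(in_dims) + tuple(out_dims))
    tensor = tensor.transpose([p for k in range(n) for p in (k, n + k)])

    tensors, rank = [], 1
    for k in range(n - 1):
        # Split off the (i_k, j_k) block with an SVD; the remainder
        # keeps the new bond index plus all modes to the right.
        mat = tensor.reshape(rank * in_dims[k] * out_dims[k], -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        new_rank = len(s) if truncate is None else min(len(s), truncate)
        tensors.append(
            u[:, :new_rank].reshape(rank, in_dims[k], out_dims[k], new_rank)
        )
        tensor = np.diag(s[:new_rank]) @ vt[:new_rank]
        rank = new_rank
    tensors.append(tensor.reshape(rank, in_dims[-1], out_dims[-1], 1))
    return tensors


def mpo_to_matrix(tensors, in_dims, out_dims):
    """Contract the local tensors back into a full matrix
    (exact reconstruction when truncate=None)."""
    n = len(tensors)
    result = tensors[0]
    for t in tensors[1:]:
        result = np.tensordot(result, t, axes=([-1], [0]))
    # result axes: (1, i_1, j_1, ..., i_n, j_n, 1); drop the dummy
    # boundary bonds, regroup input/output modes, and flatten.
    result = result.reshape([d for pair in zip(in_dims, out_dims) for d in pair])
    perm = list(range(0, 2 * n, 2)) + list(range(1, 2 * n, 2))
    return result.transpose(perm).reshape(
        int(np.prod(in_dims)), int(np.prod(out_dims))
    )


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((768, 768))
    in_dims, out_dims = (4, 12, 16), (16, 12, 4)
    cores = mpo_decompose(w, in_dims, out_dims)
    print([c.shape for c in cores])
    print(np.allclose(mpo_to_matrix(cores, in_dims, out_dims), w))  # True
```

In the architecture the abstract describes, the (comparatively large) central tensor would be stored once and shared across all layers, while each layer would keep its own small auxiliary tensors; the sketch above only reproduces the decomposition and reconstruction steps, not that sharing scheme or the proposed initialization.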