Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm

Author: Huang, Shaoyi; Xu, Dongkuan; Yen, Ian E. H.; Wang, Yijue; Chang, Sung-en; Li, Bingbing; Chen, Shiyang; Xie, Mimi; Rajasekaran, Sanguthevar; Liu, Hang; Ding, Caiwen
Publication year: 2021
Subject:
Document type: Working Paper
Description: Conventional wisdom in pruning Transformer-based language models holds that pruning reduces model expressiveness and therefore makes the model more likely to underfit than to overfit. However, under the trending pretrain-and-finetune paradigm, we postulate a counter-traditional hypothesis: pruning increases the risk of overfitting when performed in the fine-tuning phase. In this paper, we aim to address this overfitting problem and improve pruning performance via progressive knowledge distillation with error-bound properties. We show for the first time that reducing the risk of overfitting can improve the effectiveness of pruning under the pretrain-and-finetune paradigm. Ablation studies and experiments on the GLUE benchmark show that our method outperforms the leading competitors across different tasks.
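To make the general recipe the description refers to more concrete, the following is a minimal sketch, not the paper's actual algorithm: it pairs a cubic gradual-magnitude-pruning schedule with a teacher-student distillation loss during fine-tuning. The toy linear models, the 50/50 loss weighting, the temperature, and the schedule are all illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (not the authors' exact method): gradual magnitude pruning of a
# "student" combined with a knowledge-distillation loss from a dense "teacher",
# illustrating the prune-with-distillation idea the abstract describes.
# All models, data, and hyperparameters are toy placeholders.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins for a dense teacher and a to-be-pruned student classifier.
teacher = torch.nn.Linear(32, 4)
student = torch.nn.Linear(32, 4)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def sparsity_at(step, total_steps, final_sparsity=0.9):
    """Cubic sparsity schedule: ramps from 0 up to final_sparsity."""
    t = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - t) ** 3)

def magnitude_mask(weight, sparsity):
    """Return a 0/1 mask that zeroes the smallest-magnitude fraction of weights."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

total_steps, temperature = 100, 2.0
for step in range(total_steps):
    x = torch.randn(16, 32)                      # toy batch
    labels = torch.randint(0, 4, (16,))
    with torch.no_grad():
        teacher_logits = teacher(x)

    student_logits = student(x)
    # Distillation (soft targets) plus task (hard labels) losses.
    kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                  F.softmax(teacher_logits / temperature, dim=-1),
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    loss = 0.5 * kd + 0.5 * ce

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Progressively prune: recompute and apply the magnitude mask each step.
    with torch.no_grad():
        mask = magnitude_mask(student.weight, sparsity_at(step, total_steps))
        student.weight.mul_(mask)
```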
Comment: 11 pages; 16 figures; Published in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
Database: arXiv