A Unified Cascaded Encoder ASR Model for Dynamic Model Sizes

Author: Ding, Shaojin, Wang, Weiran, Zhao, Ding, Sainath, Tara N., He, Yanzhang, David, Robert, Botros, Rami, Wang, Xin, Panigrahy, Rina, Liang, Qiao, Hwang, Dongseong, McGraw, Ian, Prabhavalkar, Rohit, Strohman, Trevor
Year of Publication: 2022
Subject:
Document Type: Working Paper
Description: In this paper, we propose a dynamic cascaded encoder Automatic Speech Recognition (ASR) model, which unifies models for different deployment scenarios. Moreover, the model can significantly reduce model size and power consumption without loss of quality. Specifically, with the dynamic cascaded encoder model, we explore three techniques to maximally boost the performance of each model size: 1) use separate decoders for each sub-model while sharing the encoders; 2) use funnel-pooling to improve the encoder efficiency; 3) balance the sizes of the causal and non-causal encoders to improve quality and fit deployment constraints. Overall, the proposed large-medium model is 30% smaller and reduces power consumption by 33% compared to the baseline cascaded encoder model. The triple-size model that unifies the large, medium, and small models achieves a 37% total size reduction with minimal quality loss, while substantially reducing the engineering effort of maintaining separate models.
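The funnel-pooling technique mentioned in point 2 reduces encoder cost by downsampling the frame sequence along the time axis before the later encoder layers. A minimal sketch of such temporal pooling, assuming simple average pooling with a stride of 2 (the function name, stride, and shapes here are illustrative, not the paper's exact configuration):

```python
import numpy as np

def funnel_pool(frames: np.ndarray, stride: int = 2) -> np.ndarray:
    """Average-pool encoder frames along the time axis.

    frames: array of shape (num_frames, feature_dim).
    Trailing frames that do not fill a complete window are
    dropped for simplicity in this sketch.
    """
    num_frames, dim = frames.shape
    usable = (num_frames // stride) * stride
    # Group every `stride` consecutive frames and average them,
    # halving (for stride=2) the sequence length seen downstream.
    return frames[:usable].reshape(-1, stride, dim).mean(axis=1)

x = np.random.randn(100, 512).astype(np.float32)  # 100 frames, 512-dim features
y = funnel_pool(x, stride=2)
# y has shape (50, 512): subsequent encoder layers process half as many frames
```

Because self-attention cost grows with sequence length, halving the number of frames early in the encoder reduces compute for every layer that follows.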
Comment: Accepted by INTERSPEECH 2022
Database: arXiv