Application of Knowledge Distillation to Multi-task Speech Representation Learning

Authors: Kerpicci, Mine; Nguyen, Van; Zhang, Shuhua; Visser, Erik
Publication year: 2022
Subject:
Document type: Working Paper
Description: Model architectures such as wav2vec 2.0 and HuBERT have been proposed to learn speech representations from audio waveforms in a self-supervised manner. When combined with downstream tasks such as keyword spotting and speaker verification, they provide state-of-the-art performance. However, these models use a large number of parameters, with even the smallest version having 95 million parameters, which poses a challenge for deployment on edge AI devices. In this paper, we investigate applying knowledge distillation to speech representation learning (SRL) models, followed by joint fine-tuning with multiple downstream voice-activated tasks. In our experiments on two such tasks, our approach yields a nearly 75% reduction in model size while suffering only a 0.1% accuracy degradation and a 0.9% equal error rate degradation compared to the full-size model. In addition, we show that fine-tuning the SRL models results in a significant performance boost compared to using frozen SRL models.
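
The description outlines a two-stage recipe: distill a large self-supervised SRL teacher (e.g. wav2vec 2.0 or HuBERT) into a smaller student, then jointly fine-tune that student with several downstream heads. Below is a minimal PyTorch sketch of that idea; the student architecture, head dimensions, and loss weights are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (not the authors' code) of the two-stage recipe described above:
# (1) frame-level knowledge distillation from a frozen SRL teacher into a compact
#     student, and
# (2) joint fine-tuning of the student with two downstream heads
#     (keyword spotting, speaker verification).
# All module names, dimensions, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StudentEncoder(nn.Module):
    """Hypothetical compact encoder standing in for a distilled wav2vec 2.0/HuBERT."""

    def __init__(self, hidden: int = 256, teacher_dim: int = 768):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=10, stride=5),
            nn.GELU(),
            nn.Conv1d(hidden, hidden, kernel_size=8, stride=4),
            nn.GELU(),
        )
        # Project to the teacher's feature dimension so representations can be compared.
        self.proj = nn.Linear(hidden, teacher_dim)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:   # wav: (batch, samples)
        feats = self.conv(wav.unsqueeze(1))                  # (batch, hidden, frames)
        return self.proj(feats.transpose(1, 2))              # (batch, frames, teacher_dim)


def distillation_loss(student_feats: torch.Tensor, teacher_feats: torch.Tensor) -> torch.Tensor:
    """L1 distance between student and teacher frame-level representations."""
    frames = min(student_feats.size(1), teacher_feats.size(1))  # crop to a common length
    return F.l1_loss(student_feats[:, :frames], teacher_feats[:, :frames])


# Stage 2: joint multi-task fine-tuning with a shared, unfrozen student encoder.
kws_head = nn.Linear(768, 12)    # keyword-spotting classes (illustrative)
spk_head = nn.Linear(768, 128)   # speaker embedding for verification (illustrative)


def multitask_loss(feats, kws_labels, spk_embed_target, w_kws=1.0, w_spk=1.0):
    pooled = feats.mean(dim=1)                                # simple temporal pooling
    loss_kws = F.cross_entropy(kws_head(pooled), kws_labels)
    loss_spk = F.cosine_embedding_loss(                       # placeholder verification loss
        spk_head(pooled), spk_embed_target,
        torch.ones(pooled.size(0)),
    )
    return w_kws * loss_kws + w_spk * loss_spk
```

In such a setup, teacher_feats would come from the frozen full-size model run on the same waveform during stage 1, while stage 2 optimizes the weighted sum of the downstream task losses with the student encoder unfrozen, which is consistent with the reported benefit of fine-tuning over frozen SRL models.
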
Comment: Speech representation learning, multi-task training, wav2vec, HuBERT, knowledge distillation
Database: arXiv