HUBERT: How Much Can a Bad Teacher Benefit ASR Pre-Training?

Author: Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Abdelrahman Mohamed, Wei-Ning Hsu, Benjamin Bolte
Publication year: 2021
Subject:
Source: ICASSP
DOI: 10.1109/icassp39728.2021.9414460
Description: Compared to vision and language applications, self-supervised pre-training approaches for ASR are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) with audio-only pre-training, there is no lexicon of sound units, and (3) sound units have variable lengths with no explicit segmentation. In this paper, we propose the Hidden-Unit BERT (HUBERT) model, which utilizes a cheap k-means clustering step to provide aligned target labels for pre-training of a BERT model. A key ingredient of our approach is applying the predictive loss over the masked regions only. This allows the pre-training stage to benefit from the consistency of the unsupervised teacher rather than its intrinsic quality. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HUBERT model matches the state-of-the-art wav2vec 2.0 performance on the ultra low-resource Libri-light 10h, 1h, and 10min supervised subsets.
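
A minimal sketch of the masked-prediction idea described above, assuming scikit-learn KMeans over placeholder MFCC-like features and a toy PyTorch Transformer encoder; the feature choice, model sizes, and masking rate are illustrative assumptions, not the paper's configuration:

# Minimal sketch (editor's illustration): k-means teacher + masked prediction.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Step 1: the "bad teacher" -- offline k-means over acoustic features.
# `frames` stands in for real MFCC-like features, shape (num_frames, feat_dim).
frames = torch.randn(5000, 39)
teacher = KMeans(n_clusters=100, n_init=10).fit(frames.numpy())
pseudo_labels = torch.from_numpy(teacher.predict(frames.numpy())).long()

# Step 2: BERT-style encoder trained to predict the cluster ids of masked frames.
class MaskedPredictor(nn.Module):
    def __init__(self, feat_dim=39, hidden=256, num_clusters=100):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden)
        self.mask_emb = nn.Parameter(torch.zeros(hidden))  # learned [MASK] vector
        layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(hidden, num_clusters)

    def forward(self, x, mask):
        h = self.proj(x)
        h = torch.where(mask.unsqueeze(-1), self.mask_emb, h)  # corrupt masked frames
        return self.head(self.encoder(h))  # (batch, time, num_clusters)

model = MaskedPredictor()
x = frames[:400].unsqueeze(0)          # one toy "utterance"
y = pseudo_labels[:400].unsqueeze(0)
mask = torch.rand(x.shape[:2]) < 0.5   # illustrative frame masking (the paper masks spans)

logits = model(x, mask)
# Key ingredient: the predictive loss is applied over the masked regions only,
# so the model must infer hidden units from context rather than copy the teacher.
loss = nn.functional.cross_entropy(logits[mask], y[mask])
loss.backward()

Because the cross-entropy is evaluated only where the input was masked, the encoder cannot simply read the noisy cluster ids off the input features, which is why a consistent but imperfect teacher suffices. In the paper, the procedure is then repeated: representations from the pre-trained model are re-clustered to produce better targets for a second iteration.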
Database: OpenAIRE