Self-Training and Pre-Training are Complementary for Speech Recognition
Author: | Michael Auli, Alexis Conneau, Tatiana Likhomanenko, Qiantong Xu, Paden Tomasello, Gabriel Synnaeve, Alexei Baevski, Ronan Collobert |
Year of publication: | 2021 |
Subject: | FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Sound (cs.SD); Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS); FOS: Electrical engineering, electronic engineering, information engineering; Computer science; Speech recognition; Signal processing; Self-training; Labeled data; Error analysis |
Source: | ICASSP |
Description: | Self-training and unsupervised pre-training have emerged as effective approaches to improve speech recognition systems using unlabeled data. However, it is not clear whether they learn similar patterns or whether they can be effectively combined. In this paper, we show that pseudo-labeling and pre-training with wav2vec 2.0 are complementary in a variety of labeled data setups. Using just 10 minutes of labeled data from Libri-light as well as 53k hours of unlabeled data from LibriVox, this approach achieves word error rates (WER) of 2.8%/4.8% on the clean and other test sets of Librispeech – rivaling the best published systems trained on 960 hours of labeled data only a year ago. Training on all labeled data of Librispeech achieves WERs of 1.5%/3.1%. |
Database: | OpenAIRE |
External link: |
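
The description outlines a two-stage recipe: pre-train wav2vec 2.0 on unlabeled audio, fine-tune it on the small labeled set, then use the fine-tuned model to pseudo-label the unlabeled data for a final round of training. Below is a minimal sketch of the pseudo-labeling step only, not the authors' code: it assumes the public `facebook/wav2vec2-base-960h` checkpoint from the Hugging Face `transformers` library as a stand-in for the paper's fine-tuned model, and the loader `load_librivox_shard` is hypothetical.

```python
# Sketch of the pseudo-labeling (self-training) step, under the assumptions
# stated above; the paper decodes with a language model, which is replaced
# here by greedy CTC decoding for brevity.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
model.eval()

def pseudo_label(waveform: np.ndarray, sample_rate: int = 16_000) -> str:
    """Transcribe one unlabeled utterance to produce a pseudo-label."""
    inputs = processor(waveform, sampling_rate=sample_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    # Greedy CTC decoding: take the most likely token at each frame,
    # then collapse repeats and blanks inside batch_decode.
    ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(ids)[0]

# Usage: pair each unlabeled utterance with its pseudo-label, then train a
# final model on the union of labeled and pseudo-labeled data.
# unlabeled_audio = load_librivox_shard(...)  # hypothetical loader
# pairs = [(wav, pseudo_label(wav)) for wav in unlabeled_audio]
```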