Semi-Supervised Learning with Data Augmentation for End-to-End ASR
Author: Jesús Andrés-Ferrer, Roberto Gemello, Puming Zhan, Franco Mana, Felix Weninger
Year of publication: 2020
Subject: FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering; Computer Science - Machine Learning (cs.LG); Computer Science - Sound (cs.SD); Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS); Semi-supervised learning; Word error rate; Regularization (mathematics); Consistency (database systems); Contextual image classification; Pattern recognition; Reduction (complexity); Data set; End-to-end principle; Artificial intelligence; Computer science
Source: INTERSPEECH
DOI: 10.48550/arxiv.2007.13876
Description: In this paper, we apply Semi-Supervised Learning (SSL) along with Data Augmentation (DA) for improving the accuracy of End-to-End ASR. We focus on the consistency regularization principle, which has been successfully applied to image classification tasks, and present sequence-to-sequence (seq2seq) versions of the FixMatch and Noisy Student algorithms. Specifically, we generate the pseudo labels for the unlabeled data on-the-fly with a seq2seq model after perturbing the input features with DA. We also propose soft-label variants of both algorithms to cope with pseudo-label errors, showing further performance improvements. We conduct SSL experiments on a conversational speech data set with 1.9k hours of manually transcribed training data, using only 25% of the original labels (475h of labeled data). In our experiments, the Noisy Student algorithm with soft labels and consistency regularization achieves a 10.4% word error rate (WER) reduction when adding 475h of unlabeled data, corresponding to a recovery rate of 92%. Furthermore, when iteratively adding 950h more unlabeled data, our best SSL performance is within a 5% WER increase compared to using the full labeled training set (recovery rate: 78%). Comment: To appear in INTERSPEECH 2020
Database: OpenAIRE
External link:
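
As a rough illustration of the approach summarized in the abstract above, the following is a minimal, hypothetical sketch (not the authors' released code) of one SSL training step: a teacher seq2seq model produces soft pseudo labels for unlabeled speech on the fly, the student is fed a data-augmented version of the same input, and a soft-label consistency loss is minimized. For brevity, the models are treated as functions mapping acoustic features directly to per-token posteriors, ignoring the autoregressive decoding and beam search used in practice; `ssl_step`, `spec_augment`, and the toy models are assumptions introduced purely for illustration.

```python
# Hypothetical sketch of soft-label pseudo-labeling with data augmentation,
# loosely in the spirit of the seq2seq Noisy Student / FixMatch variants
# described in the abstract. Interfaces are simplified assumptions.
import torch
import torch.nn.functional as F


def ssl_step(teacher, student, feats, spec_augment, optimizer):
    """One semi-supervised step on a batch of unlabeled utterances.

    teacher, student: models returning per-token logits (batch, time, vocab)
                      [assumed, frame-level simplification of a seq2seq model]
    feats:            unlabeled acoustic features (batch, frames, dim)
    spec_augment:     callable applying data augmentation to the features
    """
    # 1) Teacher decodes the clean features to produce soft pseudo labels.
    with torch.no_grad():
        soft_labels = F.softmax(teacher(feats), dim=-1)      # (B, T, V)

    # 2) Student sees a perturbed (augmented) version of the same input.
    log_probs = F.log_softmax(student(spec_augment(feats)), dim=-1)

    # 3) Soft-label consistency loss: cross-entropy between teacher
    #    posteriors and student predictions (KL divergence up to a constant).
    loss = -(soft_labels * log_probs).sum(dim=-1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    B, T, D, V = 2, 50, 40, 30
    # Toy stand-ins: a linear layer maps each frame to vocabulary logits.
    teacher = torch.nn.Linear(D, V)
    student = torch.nn.Linear(D, V)
    opt = torch.optim.SGD(student.parameters(), lr=0.1)
    feats = torch.randn(B, T, D)
    noise = lambda x: x + 0.1 * torch.randn_like(x)  # stand-in for SpecAugment
    print("unlabeled-batch loss:", ssl_step(teacher, student, feats, noise, opt))
```

Using the teacher's full posterior as the target makes the loss a cross-entropy against a distribution rather than a one-hot pseudo label, which is the intent of the soft-label variants mentioned in the abstract; a hard-label (FixMatch-style) variant would instead take the decoded hypothesis, typically filtered by a confidence threshold.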