Toward Domain-Invariant Speech Recognition via Large Scale Training
Author: Khe Chai Sim, Mohamed G. Elfeky, Trevor Strohman, Ananya Misra, Michiel Bacchiani, Arun Narayanan, Anshuman Tripathi, Golan Pundak, Parisa Haghani
Year of publication: 2018
Subject: FOS: Computer and information sciences; Computation and Language (cs.CL); Audio and Speech Processing (eess.AS); FOS: Electrical engineering, electronic engineering, information engineering; computer science; speech recognition; noise measurement; background noise; feature extraction; data modeling; sampling (signal processing); codec; utterance; invariant (mathematics); networking & telecommunications; speech-language pathology & audiology
Source: SLT
DOI: 10.1109/slt.2018.8639610
Description: Current state-of-the-art automatic speech recognition systems are trained to work in specific 'domains', defined by factors like application, sampling rate, and codec. When such recognizers are used in conditions that do not match the training domain, performance drops significantly. This work explores the idea of building a single domain-invariant model for varied use cases by combining large-scale training data from multiple application domains. Our final system is trained using 162,000 hours of speech. Additionally, each utterance is artificially distorted during training to simulate effects like background noise, codec distortion, and varying sampling rates (see the sketch after this record). Our results show that, even at such a scale, a model trained this way performs almost as well as models fine-tuned to specific subsets: a single model can be robust to multiple application domains and to variations like codecs and noise. More importantly, such models generalize better to unseen conditions and allow for rapid adaptation: we show that, using as little as 10 hours of data from a new domain, an adapted domain-invariant model can match the performance of a domain-specific model trained from scratch using 70 times as much data. We also highlight some limitations of such models and areas that need to be addressed in future work.
Database: OpenAIRE
External link:
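
The description above mentions distorting each utterance during training to simulate background noise, codec effects, and sampling-rate mismatch. Below is a minimal Python sketch of that general idea, not the authors' actual pipeline (which is not public): the function names, the [5, 25] dB SNR range, and the 50% band-limiting probability are illustrative assumptions.

```python
# Illustrative multi-condition distortion for training utterances.
# Assumptions (not from the paper): SNR range, band-limiting probability,
# and the use of a simple 16 kHz -> 8 kHz -> 16 kHz round trip to mimic
# narrowband/telephony audio.
import numpy as np
from scipy.signal import resample_poly

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add background noise scaled so the mixture has the requested SNR."""
    noise = np.resize(noise, speech.shape)  # loop/trim noise to match length
    p_speech = np.mean(speech ** 2) + 1e-12
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

def simulate_narrowband(speech: np.ndarray, rate: int = 16000) -> np.ndarray:
    """Down-sample to 8 kHz and back, approximating telephony-band audio."""
    low = resample_poly(speech, 8000, rate)
    return resample_poly(low, rate, 8000)

def distort(speech: np.ndarray, noise: np.ndarray,
            rng: np.random.Generator) -> np.ndarray:
    """Randomly distort one utterance: noise at a random SNR, then
    band limiting with 50% probability."""
    out = mix_at_snr(speech, noise, rng.uniform(5.0, 25.0))
    if rng.random() < 0.5:
        out = simulate_narrowband(out)
    return out
```

In a training loop, `distort` would be applied on the fly to each utterance with noise segments drawn from a noise corpus, so the model sees a different distorted version of the same utterance on every epoch.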