Scalable Multi Corpora Neural Language Models for ASR
| Author | Denis Filimonov, Gautam Tiwari, Ariya Rastrow, Anirudh Raju, Guitang Lan |
| --- | --- |
| Language | English |
| Year | 2019 |
| Subject | FOS: Computer and information sciences; Machine Learning (cs.LG); Computation and Language (cs.CL); Computer science; Speech recognition; I.2.7; Limiting; Reduction (complexity); Margin (machine learning); Scalability; Language model; Latency (engineering) |
| Source | INTERSPEECH |
| Description | Neural language models (NLM) have been shown to outperform conventional n-gram language models by a substantial margin in Automatic Speech Recognition (ASR) and other tasks. There are, however, a number of challenges that need to be addressed before an NLM can be used in a practical large-scale ASR system. In this paper, we present solutions to some of these challenges, including training the NLM from heterogeneous corpora, limiting the latency impact, and handling personalized bias in the second-pass rescorer. Overall, we show that we can achieve a 6.2% relative WER reduction using a neural LM in a second-pass n-best rescoring framework with a minimal increase in latency. Interspeech 2019 (accepted: oral). A minimal sketch of the n-best rescoring step appears below the record. |
| Database | OpenAIRE |
| External link | |
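The abstract describes rescoring an n-best list from a first-pass decoder with a neural LM. The sketch below illustrates that general idea with a log-linear combination of the first-pass score and the NLM log-probability; the `Hypothesis` structure, the `nlm_logprob` callable, and the interpolation weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of second-pass n-best rescoring with a neural LM.
# The NLM scoring function and interpolation weight are assumptions
# made for illustration, not the paper's exact method.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Hypothesis:
    words: List[str]
    first_pass_score: float  # log-domain score from the first-pass decoder


def rescore_nbest(
    nbest: List[Hypothesis],
    nlm_logprob: Callable[[List[str]], float],
    lam: float = 0.5,  # interpolation weight between first-pass and NLM scores
) -> List[Hypothesis]:
    """Re-rank an n-best list by log-linearly combining each hypothesis's
    first-pass score with its neural LM log-probability."""
    def combined(h: Hypothesis) -> float:
        return (1.0 - lam) * h.first_pass_score + lam * nlm_logprob(h.words)

    return sorted(nbest, key=combined, reverse=True)


if __name__ == "__main__":
    # Toy stand-in for a real neural LM: penalizes each word uniformly.
    toy_nlm = lambda ws: -1.5 * len(ws)
    nbest = [
        Hypothesis(["play", "sum", "music"], first_pass_score=-11.5),
        Hypothesis(["play", "some", "music"], first_pass_score=-12.0),
    ]
    best = rescore_nbest(nbest, toy_nlm)[0]
    print(" ".join(best.words))
```

Because only the top-n hypotheses are rescored (rather than a full lattice), the second pass adds little work per utterance, which is consistent with the abstract's claim of a minimal latency increase.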