Combination of End-to-End and Hybrid Models for Speech Recognition

Author: Liang Lu, Jinyu Li, Yifan Gong, Eric Sun, Yashesh Gaur, Jeremy H. M. Wong, Rui Zhao
Year of publication: 2020
Subject:
Source: INTERSPEECH
DOI: 10.21437/interspeech.2020-2141
Description: Recent studies suggest that it may now be possible to construct end-to-end Neural Network (NN) models that perform on par with, or even outperform, hybrid models in speech recognition. These models differ in their designs and may therefore exhibit diverse and complementary error patterns, so combining their predictions may yield significant gains. This paper studies the feasibility of hypothesis-level combination between hybrid and end-to-end NN models. End-to-end NN models often exhibit a bias in their posteriors toward short hypotheses, which may adversely affect Minimum Bayes’ Risk (MBR) combination methods; MBR training and length normalisation can be used to reduce this bias. Models are trained on Microsoft’s 75 thousand hours of anonymised data and evaluated on test sets with 1.8 million words. The results show that significant gains can be obtained by combining the hypotheses of hybrid and end-to-end NN models.
Database: OpenAIRE
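
To make the hypothesis-level combination described in the abstract more concrete, the following is a minimal Python sketch, not the authors' implementation: it interpolates length-normalised posteriors from two toy n-best lists (one hybrid, one end-to-end) and picks the hypothesis with minimum expected word error. The function names, the interpolation weight, and the example n-best lists are illustrative assumptions; per-word normalisation of the log-score is a simple stand-in for the length-normalisation discussed in the abstract.

import math

def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between two token lists."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def length_normalised_posteriors(nbest):
    """Divide each log-score by hypothesis length to counter the bias toward
    short hypotheses, then renormalise the scores into posteriors."""
    scored = [(tuple(words), score / max(len(words), 1)) for words, score in nbest]
    log_z = math.log(sum(math.exp(s) for _, s in scored))
    return {words: math.exp(s - log_z) for words, s in scored}

def mbr_combine(hybrid_nbest, e2e_nbest, e2e_weight=0.5):
    """Hypothesis-level MBR combination: interpolate the two posterior
    distributions over the union of hypotheses and return the hypothesis
    with minimum expected word error."""
    p_hyb = length_normalised_posteriors(hybrid_nbest)
    p_e2e = length_normalised_posteriors(e2e_nbest)
    union = set(p_hyb) | set(p_e2e)
    posterior = {w: (1.0 - e2e_weight) * p_hyb.get(w, 0.0)
                    + e2e_weight * p_e2e.get(w, 0.0) for w in union}

    def expected_word_error(w):
        return sum(p * edit_distance(list(other), list(w))
                   for other, p in posterior.items())

    return min(union, key=expected_word_error)

# Toy n-best lists as (word sequence, log-score) pairs -- purely illustrative.
hybrid_nbest = [(["turn", "the", "lights", "on"], -4.1),
                (["turn", "the", "light", "on"], -4.4)]
e2e_nbest = [(["turn", "lights", "on"], -2.9),
             (["turn", "the", "lights", "on"], -3.6)]
print(" ".join(mbr_combine(hybrid_nbest, e2e_nbest)))

In this toy example the shorter end-to-end hypothesis loses its advantage once scores are normalised per word, and the hypothesis supported by both systems minimises the expected word error.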