Efficient Minimum Word Error Rate Training of RNN-Transducer for End-to-End Speech Recognition
Author: Che-Wei Huang, Andreas Stolcke, Maarten Van Segbroeck, Gautam Tiwari, Jinxi Guo, Jasha Droppo, Roland Maas
Year of publication: 2020
Subject: FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering; Computer Science - Machine Learning (cs.LG); Computer Science - Sound (cs.SD); Computer Science - Computation and Language (cs.CL); Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS); speech recognition; word error rate; transducer; end-to-end; decoding methods; speedup
Source: INTERSPEECH
DOI: 10.21437/interspeech.2020-1557
Description: In this work, we propose a novel and efficient minimum word error rate (MWER) training method for the RNN-Transducer (RNN-T). Unlike previous work on this topic, which performs on-the-fly, limited-size beam-search decoding and generates alignment scores for the expected edit-distance computation, our proposed method recalculates and sums the scores of all possible alignments for each hypothesis in the N-best list. The hypothesis probability scores and back-propagated gradients are computed efficiently using the forward-backward algorithm. Moreover, the proposed method decouples the decoding and training processes, so we can perform offline parallel decoding and MWER training on each data subset iteratively. Experimental results show that this semi-on-the-fly method is about six times faster than the on-the-fly method while yielding a similar WER improvement (3.6%) over a baseline RNN-T model. The proposed MWER training also effectively reduces the high deletion errors (9.2% WER reduction) introduced by RNN-T models when an end-of-sentence (EOS) token is added for the endpointer. Further improvement can be achieved by using a proposed RNN-T rescoring method to re-rank hypotheses and an external RNN-LM for additional rescoring. The best system achieves a 5% relative improvement on an English test set of real far-field recordings and an 11.6% WER reduction on music-domain utterances. Comment: Accepted to Interspeech 2020.
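The description outlines the core computation: for each hypothesis in an N-best list, p(y|x) is obtained by summing over all RNN-T alignments via the forward-backward recursion, and the MWER objective is the expected edit distance under the renormalized N-best posterior. The sketch below illustrates that idea in plain Python; it is not the authors' implementation. The function names (rnnt_log_prob, mwer_expected_errors), the toy interfaces, and the remark about a mean-error baseline are assumptions for illustration, and only the forward pass is shown since an autodiff framework would supply the gradients.

```python
# Minimal sketch (not the paper's code), assuming per-node log-probabilities
# from the RNN-T joint network are already available as nested lists.
import math

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == float("-inf"):
        return b
    if b == float("-inf"):
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def rnnt_log_prob(log_emit, log_blank):
    """log p(y | x) summed over all RNN-T alignments (forward recursion).

    log_blank[t][u]: log-prob of emitting blank at lattice node (t, u), shape T x (U+1)
    log_emit[t][u] : log-prob of emitting label y[u] at node (t, u), shape T x U
    """
    T = len(log_blank)              # number of acoustic frames
    U = len(log_blank[0]) - 1       # number of output labels
    alpha = [[float("-inf")] * (U + 1) for _ in range(T)]
    alpha[0][0] = 0.0
    for t in range(T):
        for u in range(U + 1):
            if t > 0:   # arrive via blank from the previous frame
                alpha[t][u] = logaddexp(alpha[t][u],
                                        alpha[t - 1][u] + log_blank[t - 1][u])
            if u > 0:   # arrive by emitting label y[u-1] at this frame
                alpha[t][u] = logaddexp(alpha[t][u],
                                        alpha[t][u - 1] + log_emit[t][u - 1])
    # Final blank transition closes every alignment path.
    return alpha[T - 1][U] + log_blank[T - 1][U]

def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between two word lists."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1]

def mwer_expected_errors(nbest_log_probs, nbest_hyps, reference):
    """Expected number of word errors over the N-best list (MWER objective).

    nbest_log_probs: log p(y_i | x) per hypothesis, each the sum over all
    alignments as computed by rnnt_log_prob. Probabilities are renormalized
    over the N-best list; in training, a mean-error baseline is commonly
    subtracted for variance reduction before back-propagating.
    """
    m = max(nbest_log_probs)
    w = [math.exp(lp - m) for lp in nbest_log_probs]
    z = sum(w)
    probs = [x / z for x in w]
    errors = [edit_distance(reference, hyp) for hyp in nbest_hyps]
    return sum(p * e for p, e in zip(probs, errors))
```

Because the loss depends only on a fixed set of N-best hypotheses, decoding can be run offline in parallel and the MWER update applied to each data subset in turn, which is the decoupling of decoding and training that the description refers to as semi-on-the-fly.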
Database: OpenAIRE