LT-LM: a novel non-autoregressive language model for single-shot lattice rescoring
| Author | Yuri Y. Khokhlov, Andrei Andrusenko, Maxim Korenevsky, Ivan Medennikov, Mariya Korenevskaya, Aleksandr Laptev, Anton Mitrofanov, Aleksei Romanenko, Ivan Podluzhny, Aleksei Ilin |
|---|---|
| Language | English |
| Year | 2021 |
| Subjects | FOS: Computer and information sciences; Computer Science - Machine Learning; Computer Science - Computation and Language; Artificial neural network; Computer science; Speech recognition; Single shot; Machine Learning (cs.LG); Constructed language; Autoregressive model; Audio and Speech Processing (eess.AS); Lattice (order); FOS: Electrical engineering, electronic engineering, information engineering; Feature (machine learning); Language model; Computation and Language (cs.CL); Transformer (machine learning model); Electrical Engineering and Systems Science - Audio and Speech Processing |
| Description | Neural network-based language models are commonly used in rescoring approaches to improve the quality of modern automatic speech recognition (ASR) systems. Most existing methods are computationally expensive, since they rely on autoregressive language models. We propose a novel rescoring approach that processes the entire lattice in a single call to the model. The key feature of our rescoring policy is a novel non-autoregressive Lattice Transformer Language Model (LT-LM), which takes the whole lattice as input and predicts a new language score for each arc. Additionally, we propose an artificial lattice generation approach to incorporate a large amount of text data into the LT-LM training process. In our experiments, single-shot rescoring runs orders of magnitude faster than other rescoring methods: it is more than 300 times faster than pruned RNNLM lattice rescoring and N-best rescoring, while being only slightly inferior in terms of WER. Submitted to InterSpeech 2021. |
| Database | OpenAIRE |
| External link | |
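
The abstract describes a non-autoregressive Transformer that takes an entire lattice as input and emits one language score per arc in a single forward pass. The record contains no code, so the following is only a minimal PyTorch sketch of that idea under stated assumptions, not the authors' implementation: the class name, the choice of arc features (word identity plus start/end node embeddings as a stand-in for lattice topology), and all hyperparameters are assumptions made for illustration.

```python
# Minimal sketch of non-autoregressive, per-arc lattice scoring in the spirit
# of LT-LM. NOT the authors' implementation; all names, arc features, and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class LatticeTransformerLM(nn.Module):
    """Scores every lattice arc in one forward pass (no autoregression)."""

    def __init__(self, vocab_size: int, d_model: int = 256,
                 nhead: int = 4, num_layers: int = 4, max_nodes: int = 512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        # Assumed arc features: embeddings of the arc's start and end nodes
        # give the model a crude view of the lattice topology.
        self.start_emb = nn.Embedding(max_nodes, d_model)
        self.end_emb = nn.Embedding(max_nodes, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.score_head = nn.Linear(d_model, 1)  # one LM score per arc

    def forward(self, words, starts, ends, pad_mask):
        # words/starts/ends: (batch, num_arcs); pad_mask: True where padded.
        x = self.word_emb(words) + self.start_emb(starts) + self.end_emb(ends)
        h = self.encoder(x, src_key_padding_mask=pad_mask)
        return self.score_head(h).squeeze(-1)  # (batch, num_arcs)


# Single-shot rescoring: one model call scores all arcs of a toy lattice.
model = LatticeTransformerLM(vocab_size=10_000)
words = torch.randint(0, 10_000, (1, 7))        # word id on each arc
starts = torch.tensor([[0, 0, 1, 1, 2, 3, 3]])  # arc start nodes
ends = torch.tensor([[1, 2, 2, 3, 3, 4, 4]])    # arc end nodes
pad_mask = torch.zeros(1, 7, dtype=torch.bool)  # no padding in this example
lm_scores = model(words, starts, ends, pad_mask)
print(lm_scores.shape)  # torch.Size([1, 7]): a new LM score for every arc
```

Because a bidirectional encoder scores all arcs simultaneously, rescoring costs one model call per lattice; this is what the abstract contrasts with autoregressive N-best and pruned RNNLM lattice rescoring, which must invoke the LM sequentially along hypotheses.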