Multilingual sequence-to-sequence speech recognition: architecture, transfer learning, and language modeling
| Author: | Matthew Wiesner, Martin Karafiat, Nelson Yalta, Shinji Watanabe, Takaaki Hori, Sri Harish Mallidi, Murali Karthick Baskar, Ruizhi Li, Jaejin Cho |
|---|---|
| Year of publication: | 2018 |
| Subject: | FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Sound (cs.SD); Computer Science - Computation and Language (cs.CL); Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS); FOS: Electrical engineering, electronic engineering, information engineering; speech recognition; sequence; lexicon; data modeling; language model; recurrent neural network; convolution (computer science); transfer of learning; decoding methods |
| Source: | SLT |
| DOI: | 10.48550/arxiv.1810.03459 |
| Description: | The sequence-to-sequence (seq2seq) approach to low-resource ASR is a relatively new direction in speech research. Its advantage is that models can be trained without a lexicon or frame-level alignments; however, it requires more data than conventional DNN-HMM systems. In this work, we use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port it to 4 other BABEL languages via transfer learning. We also explore different architectures for improving the prior multilingual seq2seq model. The paper further discusses the effect of integrating a recurrent neural network language model (RNNLM) with the seq2seq model during decoding. Experimental results show that transfer learning from the multilingual model yields substantial gains over monolingual models across all 4 BABEL languages. Incorporating an RNNLM also brings significant improvements in %WER and achieves recognition performance comparable to models trained with twice as much training data. |
| Database: | OpenAIRE |
| External link: | |
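
The description mentions integrating an RNNLM with the seq2seq model during decoding. The record does not say how the integration is done; a common approach in this setting is shallow fusion, where the language model's log-probabilities are added with a tunable weight to the decoder's log-probabilities at each beam-search step. The sketch below is a minimal illustration of that idea, assuming PyTorch tensors of per-token log-probabilities; the function name `fuse_scores` and the weight value 0.3 are illustrative and not taken from the paper.

```python
import torch

def fuse_scores(asr_logprobs: torch.Tensor,
                lm_logprobs: torch.Tensor,
                lm_weight: float = 0.3) -> torch.Tensor:
    """Shallow fusion: add the RNNLM log-probabilities, scaled by
    lm_weight, to the seq2seq decoder log-probabilities before the
    beam-search pruning step. Both tensors: (beam_size, vocab_size)."""
    return asr_logprobs + lm_weight * lm_logprobs

# Toy example with random scores for a beam of 2 hypotheses over a
# 5-symbol vocabulary; in a real decoder these would come from the
# attention-based seq2seq decoder and the external RNNLM, respectively.
beam_size, vocab_size = 2, 5
asr = torch.log_softmax(torch.randn(beam_size, vocab_size), dim=-1)
lm = torch.log_softmax(torch.randn(beam_size, vocab_size), dim=-1)

fused = fuse_scores(asr, lm, lm_weight=0.3)
next_tokens = fused.argmax(dim=-1)  # greedy pick per hypothesis, for illustration only
print(next_tokens)
```

In a full decoder the fused scores would feed the usual top-k selection over all hypothesis-token pairs; the LM weight is typically tuned on a development set.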