Attention-based ASR with Lightweight and Dynamic Convolutions
Author: | Aswin Shanmugam Subramanian, Shinji Watanabe, Motoi Omachi, Yuya Fujita |
---|---|
Language: | English |
Year of publication: | 2019 |
Subject: |
Computer science; Speech recognition; Convolution; Recurrent neural network; Quadratic equation; Connectionism; Hidden Markov model; Transformer (machine learning model); Audio and Speech Processing (eess.AS); Electrical Engineering and Systems Science - Audio and Speech Processing; FOS: Electrical engineering, electronic engineering, information engineering |
Source: | ICASSP |
Description: | End-to-end (E2E) automatic speech recognition (ASR) with sequence-to-sequence models has gained attention because its model training is simpler than that of conventional hidden Markov model based ASR. Recently, several studies have reported state-of-the-art E2E ASR results obtained with the Transformer. Compared to recurrent neural network (RNN) based E2E models, the Transformer is more efficient to train and also achieves better performance on various tasks. However, the self-attention used in the Transformer requires computation quadratic in its input length. In this paper, we propose to apply lightweight and dynamic convolution to E2E ASR as an alternative architecture to self-attention, making the computational order linear. We also propose joint training with connectionist temporal classification, convolution on the frequency axis, and combination with self-attention. With these techniques, the proposed architectures achieve better performance than RNN-based E2E models and performance competitive with the state-of-the-art Transformer on various ASR benchmarks, including noisy/reverberant tasks. ICASSP 2020 |
Database: | OpenAIRE |
External link: |
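The linear-time alternative the abstract describes is the lightweight convolution of Wu et al., which the paper applies to ASR: a depthwise convolution whose kernel weights are softmax-normalized along the temporal axis and shared across groups of channels ("heads"). A minimal pure-Python sketch, with illustrative function names not taken from the paper (dynamic convolution would additionally predict the kernel from the current input frame, which is omitted here):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def lightweight_conv(x, kernels, heads):
    """Lightweight convolution sketch.

    x       : T input frames, each a list of d features.
    kernels : one kernel of width k per head, shape [heads][k].
    heads   : number of channel groups sharing a kernel.

    Each kernel is softmax-normalized along the time axis and applied
    depthwise (per channel) with zero padding, so the cost is
    O(T * k * d) -- linear in the input length T, unlike the O(T^2)
    cost of self-attention.
    """
    T, d = len(x), len(x[0])
    group = d // heads          # channels per head
    k = len(kernels[0])
    pad = k // 2
    norm = [softmax(kern) for kern in kernels]  # normalize over time
    y = [[0.0] * d for _ in range(T)]
    for t in range(T):
        for c in range(d):
            h = c // group      # head that owns this channel
            acc = 0.0
            for j in range(k):
                src = t + j - pad
                if 0 <= src < T:   # zero padding at the boundaries
                    acc += norm[h][j] * x[src][c]
            y[t][c] = acc
    return y
```

With a uniform (all-zero) kernel, the softmax yields equal weights 1/k, so the output is a local moving average; in the model these kernel weights are learned, and the softmax keeps each timestep's output a convex combination of its k neighbors.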