Multi-Stream End-to-End Speech Recognition
Author: | Takaaki Hori, Sri Harish Mallidi, Xiaofei Wang, Hynek Hermansky, Ruizhi Li, Shinji Watanabe |
---|---|
Year of publication: | 2020 |
Subject: |
FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering; Sound (cs.SD); Computation and Language (cs.CL); Audio and Speech Processing (eess.AS); Acoustics and Ultrasonics; Computer Science (miscellaneous); Electrical and Electronic Engineering; Computational Mathematics; Computer science; Speech recognition; Microphone; Word error rate; End-to-end principle; Connectionism; Robustness (computer science); Test set; Encoder; Decoding methods |
Source: | IEEE/ACM Transactions on Audio, Speech, and Language Processing. 28:646-655 |
ISSN: | 2329-9304, 2329-9290 |
Description: | Attention-based methods and Connectionist Temporal Classification (CTC) networks have been promising research directions for end-to-end (E2E) Automatic Speech Recognition (ASR). The joint CTC/Attention model has achieved great success by utilizing both architectures during multi-task training and joint decoding. In this work, we present a multi-stream framework based on joint CTC/Attention E2E ASR, with parallel streams represented by separate encoders aiming to capture diverse information. On top of the regular attention networks, a Hierarchical Attention Network (HAN) is introduced to steer the decoder toward the most informative encoders. A separate CTC network is assigned to each stream to force monotonic alignments. Two representative frameworks are proposed and discussed: the Multi-Encoder Multi-Resolution (MEM-Res) framework and the Multi-Encoder Multi-Array (MEM-Array) framework. In the MEM-Res framework, two heterogeneous encoders with different architectures, temporal resolutions, and separate CTC networks work in parallel to extract complementary information from the same acoustics. Experiments conducted on Wall Street Journal (WSJ) and CHiME-4 yield relative Word Error Rate (WER) reductions of 18.0-32.1% and a best WER of 3.6% on the WSJ eval92 test set. The MEM-Array framework aims at improving far-field ASR robustness using multiple microphone arrays, each handled by a separate encoder. Compared with the best single-array results, the proposed framework achieves relative WER reductions of 3.7% and 9.7% on the AMI and DIRHA multi-array corpora, respectively, also outperforming conventional fusion strategies. Submitted to IEEE TASLP (in review). arXiv admin note: substantial text overlap with arXiv:1811.04897, arXiv:1811.04903 |
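The stream-level fusion the abstract describes (a Hierarchical Attention Network weighting per-encoder context vectors before the decoder) can be illustrated with a toy sketch. This is not the paper's implementation: the learned MLP scorer is replaced here by a plain dot product between the decoder state and each stream's context, and all names (`hierarchical_attention`, `stream_contexts`) are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def hierarchical_attention(decoder_state, stream_contexts):
    """Fuse per-stream context vectors into one context vector.

    Each stream i (one per encoder) contributes a context vector r_i.
    A stream-level score is computed against the decoder state (here a
    simple dot product, standing in for the paper's learned scorer),
    softmax turns the scores into stream weights beta_i, and the fused
    context is sum_i beta_i * r_i.
    """
    scores = [sum(s * r for s, r in zip(decoder_state, ctx))
              for ctx in stream_contexts]
    betas = softmax(scores)
    dim = len(decoder_state)
    fused = [sum(b * ctx[d] for b, ctx in zip(betas, stream_contexts))
             for d in range(dim)]
    return fused, betas
```

With two streams, a decoder state more aligned with stream 0's context gives stream 0 the larger weight, which is the "steer the decoder toward the most informative encoders" behaviour in miniature.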
Database: | OpenAIRE |
External link: |