Neural Spatio-Temporal Beamformer for Target Speech Separation
Author: | Shi-Xiong Zhang, Lianwu Chen, Yong Xu, Meng Yu, Jianming Liu, Dong Yu, Chao Weng |
Year of publication: | 2020 |
Subject: | FOS: Computer and information sciences; Computer Science - Sound (cs.SD); FOS: Electrical engineering, electronic engineering, information engineering; Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS); artificial neural network; noise reduction; speech recognition; speech quality; minimum variance distortionless response (MVDR) beamformer |
Source: | INTERSPEECH |
DOI: | 10.21437/interspeech.2020-1458 |
Description: | Purely neural network (NN) based speech separation and enhancement methods, although they can achieve good objective scores, inevitably introduce nonlinear speech distortions that are harmful to automatic speech recognition (ASR). On the other hand, the minimum variance distortionless response (MVDR) beamformer with NN-predicted masks, although it significantly reduces speech distortions, has limited noise reduction capability. In this paper, we propose a multi-tap MVDR beamformer with complex-valued masks for speech separation and enhancement. Compared to the state-of-the-art NN-mask based MVDR beamformer, the multi-tap MVDR beamformer exploits the inter-frame correlation in addition to the inter-microphone correlation already utilized in prior art. Further improvements include replacing the real-valued masks with complex-valued masks and jointly training the complex-mask NN. The evaluation on our multi-modal multi-channel target speech separation and enhancement platform demonstrates that the proposed multi-tap MVDR beamformer improves both ASR accuracy and perceptual speech quality over prior art. Accepted to Interspeech 2020. Demo: https://yongxuustc.github.io/mtmvdr/ |
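To make the multi-tap idea in the description concrete: the sketch below stacks each STFT frame with its previous taps so that inter-frame (temporal) correlation enters the covariance estimates alongside inter-microphone (spatial) correlation, then applies the standard trace-normalized MVDR solution. This is a minimal NumPy sketch under assumed array shapes, not the authors' implementation; the function name `multitap_mvdr` and all parameters are hypothetical.

```python
import numpy as np

def multitap_mvdr(Y, mask_s, mask_n, taps=3, eps=1e-6):
    """Hypothetical multi-tap MVDR sketch (not the paper's code).

    Y:       (F, T, M) complex STFT of the M-channel mixture
    mask_s:  (F, T) complex target mask predicted by a NN
    mask_n:  (F, T) complex noise/interference mask
    """
    F, T, M = Y.shape
    L = taps
    # Stack the current frame with L-1 previous frames -> (F, T, M*L),
    # so the beamformer can exploit inter-frame correlation.
    Ybar = np.zeros((F, T, M * L), dtype=Y.dtype)
    for l in range(L):
        Ybar[:, l:, l * M:(l + 1) * M] = Y[:, :T - l, :]

    S = mask_s[..., None] * Ybar  # masked target estimate
    N = mask_n[..., None] * Ybar  # masked noise estimate
    # Spatio-temporal covariance matrices, averaged over time: (F, ML, ML)
    Phi_s = np.einsum('ftm,ftn->fmn', S, S.conj()) / T
    Phi_n = np.einsum('ftm,ftn->fmn', N, N.conj()) / T
    Phi_n += eps * np.eye(M * L)[None]  # diagonal loading for stability

    # Reference-channel MVDR: w = (Phi_n^-1 Phi_s) u / tr(Phi_n^-1 Phi_s)
    num = np.linalg.solve(Phi_n, Phi_s)               # (F, ML, ML)
    tr = np.trace(num, axis1=1, axis2=2)[:, None]     # (F, 1)
    u = np.zeros(M * L)
    u[0] = 1.0  # select reference mic at the current tap
    w = (num @ u) / (tr + eps)                        # (F, ML)

    # Apply the beamformer per frequency bin and frame: (F, T) output STFT
    return np.einsum('fm,ftm->ft', w.conj(), Ybar)
```

With `taps=1` this reduces to the conventional NN-mask based MVDR, which uses only inter-microphone correlation; larger `taps` enlarge the covariance matrices to `(M*L, M*L)`, trading computation for the additional temporal context the description refers to.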
Database: | OpenAIRE |
External link: |