Showing 1 - 10 of 26 for search: '"Lianwu Chen"'
Published in:
Interspeech 2022.
Author:
Haoran Zhao, Nan Li, Runqiang Han, Lianwu Chen, Xiguang Zheng, Chen Zhang, Liang Guo, Bing Yu
Published in:
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Author:
Lianwu Chen, Chenglin Xu, Xu Zhang, Xinlei Ren, Xiguang Zheng, Chen Zhang, Liang Guo, Bing Yu
Published in:
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Author:
Xinlei Ren, Chen Zhang, Chenglin Xu, Xiguang Zheng, Liang Guo, Bing Yu, Lianwu Chen, Xu Zhang
Published in:
MLSP
Multi-channel speech enhancement has gained increasing interest in recent years. By combining the beamforming framework with deep neural networks, significant improvements in speech enhancement performance have been achieved. While the neural beamformer…
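In mask-based neural beamforming of the kind this abstract alludes to, a network typically estimates time-frequency masks from which spatial covariance matrices are built, and a classical beamformer such as MVDR is applied on top. A minimal sketch of just the MVDR weight computation, using a toy noise covariance and a hypothetical unit steering vector (both illustrative, not from any of the listed papers):

```python
import numpy as np

def mvdr_weights(noise_cov, steering):
    """MVDR beamformer weights: w = R_n^{-1} d / (d^H R_n^{-1} d)."""
    rn_inv_d = np.linalg.solve(noise_cov, steering)  # R_n^{-1} d
    return rn_inv_d / (steering.conj() @ rn_inv_d)   # normalize by d^H R_n^{-1} d

# Toy example: 4 microphones, hypothetical unit steering vector.
rng = np.random.default_rng(0)
M = 4
steering = np.ones(M, dtype=complex)
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
noise_cov = A @ A.conj().T + np.eye(M)  # Hermitian positive-definite noise covariance

w = mvdr_weights(noise_cov, steering)
# Distortionless constraint: w^H d = 1 (the target direction passes unattenuated).
```

The distortionless constraint is what makes MVDR attractive as a back end for neural mask estimators: the network only has to supply the covariance statistics, while the linear beamformer guarantees no distortion in the target direction.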
Published in:
Interspeech 2021.
Published in:
Interspeech 2021.
Author:
Bo Wu, Lianwu Chen, Shi-Xiong Zhang, Jianwei Yu, Dan Su, Yong Xu, Helin Wang, Dong Yu, Chao Weng, Meng Yu
In this paper, we explore an effective way to leverage contextual information to improve speech dereverberation performance in real-world reverberant environments. We propose a temporal-contextual attention approach on the deep neural network (DNN)…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::483af055ba3709afae698b05f5a04f7a
http://arxiv.org/abs/2103.16849
Published in:
SLT
This paper proposes a new joint optimization framework for simultaneous dereverberation, acoustic echo cancellation, and denoising, which is motivated by the recently proposed convolutional beamformer for simultaneous denoising and dereverberation.
Author:
Helen Meng, Dong Yu, Rongzhi Gu, Xunying Liu, Meng Yu, Bo Wu, Lianwu Chen, Yong Xu, Shi-Xiong Zhang, Jianwei Yu, Dan Su
Published in:
INTERSPEECH
Automatic speech recognition (ASR) of overlapped speech remains a highly challenging task to date. To this end, multi-channel microphone array data are widely used in state-of-the-art ASR systems. Motivated by the invariance of the visual modality to acoustic…
Published in:
INTERSPEECH
Purely neural network (NN) based speech separation and enhancement methods, although they can achieve good objective scores, inevitably cause nonlinear speech distortions that are harmful for automatic speech recognition (ASR). On the other hand, the…