Showing 1 - 10 of 14
for the search: '"Jiangyu Han"'
Author:
Jiangyu Han, Xu Hao, Mishal Fatima, Zunera Chauhdary, Ayesha Jamshed, Hafiz Muhammad Abdur Rahman, Rida Siddique, Muhammad Asif, Saba Rana, Liaqat Hussain
Published in:
Dose-Response, Vol 22 (2024)
Introduction: Parkinson’s disease (PD) is characterized by dopamine deficiency in the corpus striatum due to the degeneration of dopaminergic neurons in the substantia nigra. Symptoms include bradykinesia, resting tremors, unstable posture, muscula…
External link:
https://doaj.org/article/c746d6bcaeb6438f992b5d6515e74288
Author:
Weijie Huang, Xiaohui Gao, Guanyi Zhao, Yumeng Han, Jiangyu Han, Hao Tang, Zhenyu Wang, Cunbo Li, Yin Tian, Peiyang Li
Published in:
Brain-Apparatus Communication, Vol 2, Iss 1 (2023)
Purpose: EEG analysis of emotions is of great significance for the diagnosis of psychological diseases and for brain-computer interface (BCI) applications. However, applications of EEG brain neural networks for emotion classification are rarely reported…
External link:
https://doaj.org/article/ccd4055b4e2549a390a858917c392d65
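The record above concerns EEG brain-network features for emotion classification. As a generic illustration only (not the published method), the sketch below builds a simple functional-connectivity matrix from multichannel EEG by computing pairwise Pearson correlations between channel signals; the channel count and sampling rate in the toy example are assumed.

```python
import numpy as np

def eeg_connectivity(eeg, use_abs=True):
    """Pairwise Pearson-correlation connectivity from multichannel EEG.

    eeg: array of shape (n_channels, n_samples).
    Returns an (n_channels, n_channels) adjacency matrix; taking the
    absolute value treats strong anti-correlation as a connection too.
    """
    conn = np.corrcoef(eeg)          # pairwise Pearson correlations
    np.fill_diagonal(conn, 0.0)      # drop trivial self-connections
    return np.abs(conn) if use_abs else conn

# Toy example: 32 channels, 4 s of EEG sampled at 250 Hz (assumed values).
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 4 * 250))
print(eeg_connectivity(eeg).shape)   # (32, 32)
```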
Author:
Jiangyu Han, Yanhua Long
Published in:
EURASIP Journal on Audio, Speech, and Music Processing, Vol 2023, Iss 1, Pp 1-17 (2023)
Abstract: Recently, supervised speech separation has made great progress. However, limited by the nature of supervised training, most existing separation methods require ground-truth sources and are trained on synthetic datasets. This ground-truth reliance…
External link:
https://doaj.org/article/8ec6c0c95713493489578d2f43b2a827
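For context on the supervised setting the abstract above contrasts itself with: supervised separation models are commonly trained or evaluated with the scale-invariant signal-to-noise ratio (SI-SNR) between an estimated source and its ground-truth reference. The sketch below is a generic SI-SNR computation, not code from the cited paper.

```python
import numpy as np

def si_snr(estimate, reference, eps=1e-8):
    """Scale-invariant SNR in dB between an estimated and a reference source.

    Both inputs are 1-D waveforms of the same length; higher is better.
    """
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # Project the estimate onto the reference to remove any scale difference.
    scale = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = scale * reference
    noise = estimate - target
    return 10.0 * np.log10((np.sum(target ** 2) + eps) / (np.sum(noise ** 2) + eps))

# Toy check: a lightly corrupted copy of the reference scores around 20 dB,
# while unrelated noise scores far below 0 dB.
rng = np.random.default_rng(0)
ref = rng.standard_normal(16000)
print(si_snr(ref + 0.1 * rng.standard_normal(16000), ref))
print(si_snr(rng.standard_normal(16000), ref))
```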
Published in:
International Journal of Speech Technology. 25:261-268
PercepNet, a recent extension of RNNoise, an efficient, high-quality, real-time full-band speech enhancement technique, has shown promising performance in various public deep noise suppression tasks. This paper proposes a new approach, named P…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d3bb0a55e74d17d194b04741f845e4c2
http://arxiv.org/abs/2203.02263
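RNNoise, PercepNet and the proposed extension named in the record above are specific published systems; the sketch below does not reproduce any of them. It only illustrates the broader family of mask-based enhancement with an assumed, generic Wiener-style spectral gain applied per STFT bin from a given noise-power estimate.

```python
import numpy as np

def wiener_gain_enhance(noisy_stft, noise_psd, gain_floor=0.1):
    """Generic Wiener-style spectral gain (illustration, not PercepNet/RNNoise).

    noisy_stft: complex STFT, shape (freq_bins, frames).
    noise_psd:  estimated noise power per frequency bin, shape (freq_bins, 1).
    Returns the enhanced complex STFT.
    """
    noisy_psd = np.abs(noisy_stft) ** 2
    # Subtract the noise estimate in the power domain and floor the gain to
    # limit musical-noise artifacts; scaling the complex spectrum keeps the
    # noisy phase unchanged.
    gain = np.maximum(1.0 - noise_psd / np.maximum(noisy_psd, 1e-12), gain_floor)
    return gain * noisy_stft

# Toy usage with an assumed 257-bin, 100-frame spectrogram.
rng = np.random.default_rng(0)
stft = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
enhanced = wiener_gain_enhance(stft, noise_psd=np.full((257, 1), 0.5))
```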
Author:
Jiangyu Han, Yanhua Long
Published in:
SSRN Electronic Journal.
Recently, supervised speech separation has made great progress. However, limited by the nature of supervised training, most existing separation methods require ground-truth sources and are trained on synthetic datasets. This ground-truth reliance is…
In recent years, a number of time-domain speech separation methods have been proposed. However, most of them are very sensitive to the environments and wide domain coverage tasks. In this paper, from the time-frequency domain perspective, we propose…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::790340ef962e02f7c8524d23fe003cf6
http://arxiv.org/abs/2112.13520
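The second abstract in the record above argues for a time-frequency-domain perspective on separation. As a generic illustration of that pipeline (assumed parameters, not the paper's model), the sketch below applies per-source magnitude masks to the mixture's STFT and resynthesizes waveforms with the inverse STFT.

```python
import numpy as np
from scipy.signal import stft, istft

def mask_and_reconstruct(mixture, masks, fs=16000, nperseg=512):
    """Apply per-source time-frequency masks to a mixture and resynthesize.

    mixture: 1-D mixture waveform.
    masks:   list of real-valued masks, each shaped like the mixture's STFT.
    Returns one estimated waveform per mask.
    """
    _, _, mix_stft = stft(mixture, fs=fs, nperseg=nperseg)
    estimates = []
    for mask in masks:
        _, source = istft(mask * mix_stft, fs=fs, nperseg=nperseg)
        estimates.append(source[: len(mixture)])
    return estimates

# Toy usage: two flat masks that simply split the mixture's energy in half.
rng = np.random.default_rng(0)
mix = rng.standard_normal(16000)
_, _, Z = stft(mix, fs=16000, nperseg=512)
sources = mask_and_reconstruct(mix, [np.full(Z.shape, 0.5)] * 2)
```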
Target speech extraction has attracted widespread attention. When microphone arrays are available, the additional spatial information can be helpful in extracting the target speech. We have recently proposed a channel decorrelation (CD) mechanism to…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::1860bf9562bdb4bef49d74461a07e070
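The channel decorrelation (CD) mechanism named in the record above is the authors' own method and is not reproduced below. The sketch only illustrates the underlying idea of measuring how much two microphone channels differ, using one minus the per-frame cosine similarity as an assumed, simplified "decorrelation" score.

```python
import numpy as np

def frame_decorrelation(ref_ch, aux_ch, frame=512, hop=256, eps=1e-8):
    """Per-frame inter-channel decorrelation score (illustration only).

    ref_ch, aux_ch: equal-length waveforms from two microphones.
    Returns 1 - cosine similarity for each frame; larger values mean the
    two channels disagree more in that frame.
    """
    scores = []
    for start in range(0, len(ref_ch) - frame + 1, hop):
        a = ref_ch[start:start + frame]
        b = aux_ch[start:start + frame]
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
        scores.append(1.0 - cos)
    return np.asarray(scores)

# Toy usage with two assumed 1 s microphone signals at 16 kHz.
rng = np.random.default_rng(0)
mic0 = rng.standard_normal(16000)
mic1 = 0.8 * mic0 + 0.2 * rng.standard_normal(16000)
print(frame_decorrelation(mic0, mic1).mean())
```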
Author:
Wei Rao, Yihui Fu, Yanxin Hu, Xin Xu, Yvkai Jv, Jiangyu Han, Zhongjie Jiang, Lei Xie, Yannan Wang, Shinji Watanabe, Zheng-Hua Tan, Hui Bu, Tao Yu, Shidong Shang
Published in:
Rao, W., Fu, Y., Hu, Y., Xu, X., Jv, Y., Han, J., Jiang, Z., Xie, L., Wang, Y., Watanabe, S., Tan, Z.-H., Bu, H., Yu, T., & Shang, S. (2021). ConferencingSpeech Challenge: Towards Far-field Multi-Channel Speech Enhancement for Video Conferencing. In IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 13 December 2021, article 9688126. IEEE. https://doi.org/10.1109/ASRU51503.2021.9688126
The ConferencingSpeech 2021 challenge is proposed to stimulate research on far-field multi-channel speech enhancement for video conferencing. The challenge consists of two separate tasks: 1) Task 1 is multi-channel speech enhancement with single micr…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e2d3ccb80030522588703804dab2c229
https://vbn.aau.dk/da/publications/e3d2e75c-f42b-4149-9b1a-20e9b61fe583
Published in:
ICASSP
The end-to-end approaches for single-channel target speech extraction have attracted widespread attention. However, studies of end-to-end multi-channel target speech extraction are still relatively limited. In this work, we propose two methods f…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::49cbdd6d5cda7f9eef915853a6deafc3
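As background for multi-channel front-ends of the kind the last record builds on (not the paper's proposed methods), the sketch below shows a classical delay-and-sum beamformer that aligns microphone signals by assumed integer-sample delays before averaging them.

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Classical delay-and-sum beamformer (background illustration only).

    channels: array of shape (n_mics, n_samples).
    delays:   non-negative integer sample delays that align each microphone
              with the target direction.
    Returns the beamformed single-channel signal.
    """
    n_mics, n_samples = channels.shape
    out = np.zeros(n_samples)
    for mic, delay in zip(channels, delays):
        aligned = np.zeros(n_samples)
        aligned[: n_samples - delay] = mic[delay:]   # advance by `delay` samples
        out += aligned
    return out / n_mics

# Toy usage: four microphones, 1 s at 16 kHz, with assumed integer delays.
rng = np.random.default_rng(0)
mics = rng.standard_normal((4, 16000))
beam = delay_and_sum(mics, delays=[0, 1, 2, 3])
```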