A convolutional recurrent neural network with attention framework for speech separation in monaural recordings

Author: Chao Sun, Min Zhang, Ruijuan Wu, Junhong Lu, Guo Xian, Qin Yu, Xiaofeng Gong, Ruisen Luo
Language: English
Year of publication: 2021
Source: Scientific Reports, Vol 11, Iss 1, Pp 1-14 (2021)
Document type: article
ISSN: 2045-2322
DOI: 10.1038/s41598-020-80713-3
Description: Abstract Most monaural speech separation studies use only a single type of network, and the separation quality is typically unsatisfactory, making high-quality speech separation difficult. In this study, we propose a convolutional recurrent neural network with an attention (CRNN-A) framework for speech separation, fusing the advantages of the two networks. The proposed separation framework uses a convolutional neural network (CNN) as the front-end of a recurrent neural network (RNN), alleviating the problem that an RNN alone cannot effectively learn the necessary features. The framework makes use of the translation invariance provided by the CNN to extract information without modifying the original signals. Within the front-end CNN, two different convolution kernels are designed to capture information in both the time and frequency domains of the input spectrogram. After the time-domain and frequency-domain feature maps are concatenated, the speech feature information is exploited through consecutive convolutional layers. Finally, the feature map learned by the front-end CNN is combined with the original spectrogram and sent to the back-end RNN. In addition, an attention mechanism is incorporated, focusing on the relationships among different feature maps. The effectiveness of the proposed method is evaluated on the standard MIR-1K dataset, and the results show that the proposed method outperforms the baseline RNN and other popular speech separation methods in terms of GNSDR (global normalised source-to-distortion ratio), GSIR (global source-to-interference ratio), and GSAR (global source-to-artifacts ratio). In summary, the proposed CRNN-A framework can effectively combine the advantages of CNN and RNN, and further optimise the separation performance via the attention mechanism. The proposed framework can shed new light on speech separation, speech enhancement, and other related fields.
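
The abstract outlines the pipeline: a two-kernel CNN front-end over the spectrogram, attention over the resulting feature maps, concatenation with the original spectrogram, and a recurrent back-end that produces the separated sources. The following is a minimal PyTorch sketch of such a CRNN-A style model; the kernel shapes, channel counts, squeeze-and-excitation style channel attention, and mask-based output are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of a CRNN-with-attention separator in the spirit of the abstract.
# Hyperparameters and the attention/mask formulation are assumptions for illustration.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention over feature maps (assumed form)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (batch, C, freq, time)
        w = self.fc(x.mean(dim=(2, 3)))          # global average pool -> channel weights
        return x * w[:, :, None, None]           # re-weight each feature map

class CRNNA(nn.Module):
    def __init__(self, n_freq=513, hidden=256, n_sources=2):
        super().__init__()
        # Two kernels: one elongated along time, one along frequency (assumed shapes).
        self.time_conv = nn.Conv2d(1, 16, kernel_size=(1, 7), padding=(0, 3))
        self.freq_conv = nn.Conv2d(1, 16, kernel_size=(7, 1), padding=(3, 0))
        self.attn = ChannelAttention(32)
        self.post_conv = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.ReLU(),
        )
        # Back-end RNN over time frames; input = CNN features + original spectrogram.
        self.rnn = nn.LSTM(input_size=2 * n_freq, hidden_size=hidden,
                           num_layers=2, batch_first=True, bidirectional=True)
        self.mask = nn.Linear(2 * hidden, n_sources * n_freq)
        self.n_sources, self.n_freq = n_sources, n_freq

    def forward(self, spec):                     # spec: (batch, freq, time) magnitudes
        x = spec.unsqueeze(1)                    # -> (batch, 1, freq, time)
        feats = torch.cat([self.time_conv(x), self.freq_conv(x)], dim=1)
        feats = self.attn(feats)                 # attention over the feature maps
        feats = self.post_conv(feats)            # -> (batch, 1, freq, time)
        # Combine learned features with the original spectrogram, then run the RNN.
        rnn_in = torch.cat([feats.squeeze(1), spec], dim=1)        # (batch, 2*freq, time)
        rnn_out, _ = self.rnn(rnn_in.transpose(1, 2))              # (batch, time, 2*hidden)
        masks = torch.sigmoid(self.mask(rnn_out))                  # (batch, time, S*freq)
        masks = masks.view(spec.size(0), -1, self.n_sources, self.n_freq)
        return masks.permute(0, 2, 3, 1) * spec.unsqueeze(1)       # masked source spectra

For a mixture spectrogram of shape (batch, 513, frames), the forward pass returns per-source masked magnitude spectra of shape (batch, 2, 513, frames); waveforms can then be reconstructed using the mixture phase.
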
Database: Directory of Open Access Journals