A noise-robust voice conversion method with controllable background sounds

Authors: Lele Chen, Xiongwei Zhang, Yihao Li, Meng Sun, Weiwei Chen
Language: English
Year of publication: 2024
Source: Complex & Intelligent Systems, Vol 10, Iss 3, Pp 3981-3994 (2024)
Document type: article
ISSN: 2199-4536, 2198-6053
DOI: 10.1007/s40747-024-01375-6
Description: Abstract Background noises are usually treated as redundant or even harmful to voice conversion. Therefore, when converting noisy speech, a pretrained speech separation module is usually deployed to estimate clean speech prior to conversion. However, this can lead to speech distortion due to the mismatch between the separation module and the conversion module. In this paper, a noise-robust voice conversion model is proposed in which a user can freely choose to retain or remove the background sounds. Firstly, a speech separation module with a dual-decoder structure is proposed, where the two decoders decode the denoised speech and the background sounds, respectively. A bridge module captures the interactions between the denoised speech and the background sounds in parallel layers through information exchange. Subsequently, a voice conversion module with multiple encoders is used to convert the clean speech estimated by the speech separation module. Finally, the speech separation and voice conversion modules are jointly trained with a loss function combining cycle loss and mutual information loss, aiming to improve the decoupling among speech content, pitch, and speaker identity. Experimental results show that the proposed model obtains significant improvements in both subjective and objective evaluation metrics compared with existing baselines. The speech naturalness and speaker similarity of the converted speech reach 3.47 and 3.43, respectively.
Database: Directory of Open Access Journals
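The dual-decoder separation with a bridge module described in the abstract can be sketched as a toy NumPy model. This is an illustrative sketch only, not the authors' implementation: the layer type (dense tanh layers), the depth, and the residual way the bridge output is fed back into both branches are all assumptions, since the record does not specify the architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(d_in, d_out):
    # Toy dense layer: fixed random weights plus a tanh nonlinearity.
    W = rng.standard_normal((d_in, d_out)) * 0.1
    return lambda x: np.tanh(x @ W)

D = 16  # feature dimension (assumed)
enc = linear(D, D)                                 # shared encoder
speech_dec = [linear(D, D) for _ in range(3)]      # decoder branch: denoised speech
noise_dec = [linear(D, D) for _ in range(3)]       # decoder branch: background sounds
bridge = [linear(2 * D, D) for _ in range(3)]      # one bridge per parallel layer

def separate(mix):
    """Decode speech and background in parallel; at each layer the bridge
    fuses both streams so the branches can exchange information."""
    h = enc(mix)
    s, n = h, h
    for ds, dn, br in zip(speech_dec, noise_dec, bridge):
        s, n = ds(s), dn(n)
        shared = br(np.concatenate([s, n], axis=-1))  # information exchange
        s, n = s + shared, n + shared                 # feed it back to both branches
    return s, n  # estimated clean speech, estimated background sounds

mix = rng.standard_normal((4, D))  # 4 frames of mixture features
speech, background = separate(mix)
print(speech.shape, background.shape)  # (4, 16) (4, 16)
```

Keeping a second decoder for the background sounds, rather than discarding them, is what lets the user later choose whether to add them back after conversion.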