Target Speaker Extraction by Fusing Voiceprint Features

Author: Shidan Cheng, Ying Shen, Dongqing Wang
Language: English
Year of publication: 2022
Source: Applied Sciences, Vol 12, Iss 16, p 8152 (2022)
Document type: article
ISSN: 2076-3417
DOI: 10.3390/app12168152
Description: Accurately separating each speaker’s clean speech in multi-speaker scenarios is a critical problem. In most cases, however, smart devices such as smartphones interact with only one specific user, so the speech separation models these devices adopt only need to extract the target speaker’s speech. A voiceprint, which reflects the speaker’s voice characteristics, provides prior knowledge for target speech separation. How to efficiently integrate voiceprint features into existing speech separation models to improve their target speech separation performance is therefore an interesting and not fully explored problem. This paper addresses this issue, and our contributions are as follows. First, two voiceprint features (i.e., MFCCs and the d-vector) are explored for enhancing the performance of three speech separation models. Second, three feature fusion methods are proposed to efficiently fuse the voiceprint features with the magnitude spectrograms originally used by the speech separation models. Third, a target speech extraction method that utilizes the fused features is proposed for two speaker-independent models. Experiments demonstrate that speech separation models integrated with voiceprint features via the three fusion methods can effectively extract the target speaker’s speech.
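The abstract does not detail the three proposed fusion methods, but the simplest form of such fusion is concatenating a speaker-derived voiceprint with the mixture’s magnitude spectrogram along the feature axis. The Python sketch below illustrates that idea only as an assumption, not the paper’s actual design: the voiceprint is taken to be a time-averaged MFCC vector from an enrollment utterance, the fusion is frame-wise concatenation, and all names and parameter values (fuse_voiceprint, n_fft=512, n_mfcc=20, etc.) are hypothetical.

```python
# Illustrative sketch of voiceprint/spectrogram fusion (an assumption, not the
# paper's method): a time-averaged MFCC voiceprint from the target speaker's
# enrollment audio is tiled across the mixture's frames and concatenated with
# the mixture's magnitude spectrogram along the feature axis (early fusion).
import numpy as np
import librosa


def fuse_voiceprint(mixture, reference, sr=16000,
                    n_fft=512, hop_length=128, n_mfcc=20):
    # Magnitude spectrogram of the mixed speech, shape (n_fft // 2 + 1, T).
    mag = np.abs(librosa.stft(mixture, n_fft=n_fft, hop_length=hop_length))
    # Frame-level MFCCs of the target speaker's enrollment utterance.
    mfcc = librosa.feature.mfcc(y=reference, sr=sr, n_mfcc=n_mfcc,
                                n_fft=n_fft, hop_length=hop_length)
    # Collapse to one utterance-level voiceprint vector, shape (n_mfcc,).
    voiceprint = mfcc.mean(axis=1)
    # Tile the voiceprint across every mixture frame, then concatenate,
    # giving a fused feature of shape (n_fft // 2 + 1 + n_mfcc, T).
    tiled = np.repeat(voiceprint[:, None], mag.shape[1], axis=1)
    return np.concatenate([mag, tiled], axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mixture = rng.standard_normal(16000).astype(np.float32)    # 1 s dummy mix
    reference = rng.standard_normal(32000).astype(np.float32)  # 2 s enrollment
    fused = fuse_voiceprint(mixture, reference)
    print(fused.shape)  # (257 + 20, frames)
```

The fused matrix would replace the plain magnitude spectrogram at a separation model’s input, so the network sees the target speaker’s identity at every frame; the paper’s d-vector variant would presumably substitute a learned speaker embedding for the averaged MFCCs.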
Database: Directory of Open Access Journals