Disentangled speaker and nuisance attribute embedding for robust speaker verification
Author: Kang, Woo Hyun; Mun, Sung Hwan; Han, Min Hyun; Kim, Nam Soo
Year of publication: 2020
Subject:
Document type: Working Paper
DOI: 10.1109/ACCESS.2020.3012893
Description: In recent years, various deep learning-based embedding methods have been proposed and have shown impressive performance in speaker verification. However, as with most classical embedding techniques, the deep learning-based methods are known to suffer severe performance degradation when dealing with speech samples recorded under different conditions (e.g., recording devices, emotional states). In this paper, we propose a novel fully supervised training method for extracting a speaker embedding vector disentangled from the variability caused by nuisance attributes. The proposed framework was compared with conventional deep learning-based embedding methods using the RSR2015 and VoxCeleb1 datasets. Experimental results show that the proposed approach can extract speaker embeddings robust to channel and emotional variability. Comment: Accepted in IEEE Access. (An illustrative sketch of the general idea follows this record.)
Database: arXiv
External link:
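The abstract does not describe the architecture or losses, so the following is only a minimal sketch of the general idea of fully supervised speaker/nuisance disentanglement: a shared encoder with two embedding heads, each supervised by a classifier for its own attribute (speaker identity vs. a nuisance label such as recording device or emotion). All names, dimensions, and the simple cross-entropy objective here are assumptions for illustration, not the authors' actual method.

```python
# Illustrative sketch only: a generic two-branch embedder that separates a
# speaker embedding from a nuisance embedding, each with its own supervised
# classifier. Architecture, dimensions, and losses are assumptions, not the
# paper's actual design.
import torch
import torch.nn as nn


class DisentangledEmbedder(nn.Module):
    def __init__(self, feat_dim=40, emb_dim=128, n_speakers=100, n_nuisance=5):
        super().__init__()
        # Shared frame-level encoder (hypothetical; the paper's encoder may differ).
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # Separate heads producing the speaker and nuisance embeddings.
        self.speaker_head = nn.Linear(256, emb_dim)
        self.nuisance_head = nn.Linear(256, emb_dim)
        # Supervised classifiers on top of each embedding.
        self.speaker_clf = nn.Linear(emb_dim, n_speakers)
        self.nuisance_clf = nn.Linear(emb_dim, n_nuisance)

    def forward(self, x):
        # x: (batch, frames, feat_dim) acoustic features.
        h = self.encoder(x).mean(dim=1)  # temporal average pooling
        spk_emb = self.speaker_head(h)
        nui_emb = self.nuisance_head(h)
        return spk_emb, nui_emb, self.speaker_clf(spk_emb), self.nuisance_clf(nui_emb)


def training_step(model, feats, spk_labels, nui_labels):
    """One fully supervised step: each branch predicts only its own attribute,
    which is one simple way to encourage the two embeddings to disentangle."""
    ce = nn.CrossEntropyLoss()
    _, _, spk_logits, nui_logits = model(feats)
    return ce(spk_logits, spk_labels) + ce(nui_logits, nui_labels)


if __name__ == "__main__":
    model = DisentangledEmbedder()
    feats = torch.randn(8, 200, 40)  # 8 utterances, 200 frames of 40-dim features
    loss = training_step(
        model, feats,
        torch.randint(0, 100, (8,)),  # speaker labels
        torch.randint(0, 5, (8,)),    # nuisance labels (e.g., device/emotion)
    )
    loss.backward()
    print(float(loss))
```

At test time only the speaker branch would be used for verification scoring; the nuisance branch exists to absorb condition-related variability during training.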