Are disentangled representations all you need to build speaker anonymization systems?

Author: Champion, Pierre; Jouvet, Denis; Larcher, Anthony
Year of publication: 2022
Subject:
Source: INTERSPEECH 2022 - Human and Humanizing Speech Technology, Sep 2022, Incheon, South Korea
Document type: Working Paper
Description: Speech signals contain a lot of sensitive information, such as the speaker's identity, which raises privacy concerns when speech data is collected. Speaker anonymization aims to transform a speech signal to remove the source speaker's identity while leaving the spoken content unchanged. Current methods perform the transformation by relying on content/speaker disentanglement and voice conversion. Usually, an acoustic model from an automatic speech recognition system extracts the content representation, while an x-vector system extracts the speaker representation. Prior work has shown that the extracted features are not perfectly disentangled. This paper tackles how to improve feature disentanglement, and thus the converted anonymized speech. We propose enhancing the disentanglement by removing speaker information from the acoustic model using vector quantization. An evaluation using the VoicePrivacy 2022 toolkit showed that vector quantization helps conceal the original speaker identity while maintaining utility for speech recognition.
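As an illustration of the core idea in the description, the sketch below shows frame-wise vector quantization of content features: each frame is replaced by its nearest codebook entry, which bottlenecks the representation and can discard residual speaker information. This is a minimal, hedged example; the function name, codebook size, and feature dimension are illustrative assumptions, not details taken from the paper or the VoicePrivacy toolkit.

    # Minimal sketch of frame-wise vector quantization (illustrative only).
    import numpy as np

    def quantize(features, codebook):
        """Map each feature frame to its nearest codebook entry (L2 distance).

        features: (num_frames, dim) array of content features.
        codebook: (num_codes, dim) array of learned code vectors.
        Returns the quantized frames, shape (num_frames, dim).
        """
        # Pairwise squared distances between frames and code vectors.
        dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        indices = dists.argmin(axis=1)   # nearest code per frame
        return codebook[indices]         # replace each frame by its code vector

    # Example with made-up sizes: 200 frames of 256-dim features, 48 codes.
    rng = np.random.default_rng(0)
    feats = rng.standard_normal((200, 256))
    codes = rng.standard_normal((48, 256))
    quantized = quantize(feats, codes)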
Database: arXiv