On the Generation and Removal of Speaker Adversarial Perturbation for Voice-Privacy Protection
Author: | Guo, Chenyang, Chen, Liping, Li, Zhuhai, Lee, Kong Aik, Ling, Zhen-Hua, Guo, Wu |
Year of publication: | 2024 |
Subject: | |
Source: | 2024 IEEE Spoken Language Technology Workshop (SLT), 2024, pp. 1197-1202 |
Document type: | Working Paper |
Description: | Neural networks are known to be vulnerable to adversarial attacks mounted through subtle perturbations of the input data. Recent developments in voice-privacy protection have shown a positive use of the same technique: concealing a speaker's voice attributes with an additive perturbation signal generated by an adversarial network. This paper examines the reversibility property, whereby the entity that generates the adversarial perturbations (e.g., the speaker him/herself) is authorized to remove them and restore the original speech. A similar technique could also be used by an investigator to de-anonymize voice-protected speech and restore criminals' identities in security and forensic analysis. In this setting, the perturbation-generation module is assumed to be known during the removal process. To this end, joint training of the perturbation generation and removal modules is proposed. Experimental results on the LibriSpeech dataset demonstrate that the subtle perturbations added to the original speech can be predicted from the anonymized speech while still achieving the goal of privacy protection. By removing these perturbations from the anonymized sample, the original speech can be restored. Audio samples can be found at \url{https://voiceprivacy.github.io/Perturbation-Generation-Removal/}. Comment: 6 pages, 3 figures, published at the IEEE SLT Workshop 2024 |
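The additive-perturbation scheme described above can be sketched in a few lines: a generator maps speech to a subtle perturbation, the anonymized signal is the sum of the two, and a jointly trained remover predicts the perturbation from the anonymized signal so the original waveform can be recovered. The toy networks, the restoration-only loss, and all sizes below are illustrative assumptions, not the authors' architecture (the paper's actual objective also enforces speaker concealment).

```python
# Minimal sketch, assuming tiny 1-D conv nets stand in for the paper's
# generation and removal modules; this is NOT the authors' model.
import torch
import torch.nn as nn


def small_conv_net():
    # Length-preserving 1-D conv stack (kernel 9, padding 4).
    return nn.Sequential(
        nn.Conv1d(1, 8, 9, padding=4), nn.Tanh(),
        nn.Conv1d(8, 1, 9, padding=4),
    )


class PerturbationGenerator(nn.Module):
    """Maps a waveform x to a subtle additive perturbation delta."""
    def __init__(self):
        super().__init__()
        self.net = small_conv_net()

    def forward(self, x):
        return 0.01 * self.net(x)  # scale keeps the perturbation subtle


class PerturbationRemover(nn.Module):
    """Predicts the perturbation from the anonymized waveform y."""
    def __init__(self):
        super().__init__()
        self.net = small_conv_net()

    def forward(self, y):
        return self.net(y)


torch.manual_seed(0)
gen, rem = PerturbationGenerator(), PerturbationRemover()
opt = torch.optim.Adam(list(gen.parameters()) + list(rem.parameters()), lr=1e-3)

x = torch.randn(4, 1, 1024)  # stand-in batch for speech waveforms

for _ in range(50):  # joint training of both modules
    delta = gen(x)
    y = x + delta            # anonymized speech
    delta_hat = rem(y)       # perturbation predicted from y alone
    x_hat = y - delta_hat    # restored speech
    # Toy objective: restoration fidelity only; the real system would
    # add a privacy (speaker-concealment) term on y.
    loss = ((x_hat - x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    y = x + gen(x)
    restored = y - rem(y)
    mse = float(((restored - x) ** 2).mean())
```

Because the remover sees only the anonymized signal, anyone holding the trained removal module (the authorized entity in the paper's setting) can undo the protection, while listeners without it only observe the perturbed speech.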
Database: | arXiv |
External link: |