Audio Adversarial Examples: Attacks Using Vocal Masks
Author: | Tay, Kai Yuan, Ng, Lynnette, Chua, Wei Han, Loke, Lucerne, Ye, Danqi, Chua, Melissa |
---|---|
Publication Year: | 2021 |
Subject: | |
Document Type: | Working Paper |
Description: | We construct audio adversarial examples against automatic Speech-To-Text (STT) systems. Given any audio waveform, we produce another by overlaying an audio vocal mask generated from the original audio. We apply our audio adversarial attack to five state-of-the-art STT systems: DeepSpeech, Julius, Kaldi, wav2letter@anywhere and CMUSphinx. In addition, we engaged human annotators to transcribe the adversarial audio. Our experiments show that these adversarial examples fool state-of-the-art Speech-To-Text systems, yet humans are able to consistently pick out the speech. The feasibility of this attack introduces a new domain to study machine and human perception of speech. Comment: 9 pages, 1 figure, 2 tables. Submitted to COLING2020 |
Database: | arXiv |
External Link: |
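The description mentions producing an adversarial waveform by overlaying a vocal mask on the original audio. A minimal sketch of the overlay step is shown below, assuming the original and mask are float waveforms normalized to [-1, 1]; the mask-generation procedure itself is not described in this record, and the `alpha` mixing parameter is a hypothetical illustration, not taken from the paper.

```python
import numpy as np

def overlay_mask(original: np.ndarray, mask: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Overlay a vocal-mask waveform onto the original audio.

    `alpha` (hypothetical parameter, not from the paper) scales the
    mask's contribution to the mixed waveform.
    """
    # Pad or trim the mask so both signals have the same length.
    if len(mask) < len(original):
        mask = np.pad(mask, (0, len(original) - len(mask)))
    else:
        mask = mask[: len(original)]
    # Mix the two waveforms and clip to the valid range [-1, 1].
    mixed = original + alpha * mask
    return np.clip(mixed, -1.0, 1.0)
```

This additive mixing is only one plausible reading of "overlaying"; the paper may apply the mask in the spectral domain or with a different combination rule.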