Showing 1 - 8 of 8 for search: "Pizzi, Karla"
Published in:
Proc. 4th Symposium on Security and Privacy in Speech Communication, 26-32, 2024
In this study, we investigate whether noise-augmented training can concurrently improve adversarial robustness in automatic speech recognition (ASR) systems. We conduct a comparative analysis of the adversarial robustness of four different state-of-the-art…
External link:
http://arxiv.org/abs/2409.01813
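The noise-augmented training the abstract refers to can be illustrated with a minimal sketch (not the paper's actual pipeline): each training utterance is mixed with white Gaussian noise at a target signal-to-noise ratio before being fed to the model. The function name and SNR value below are illustrative assumptions.

```python
import numpy as np

def augment_with_noise(waveform: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Mix white Gaussian noise into a waveform at a target SNR (in dB).

    A common form of noise augmentation: the model is trained on degraded
    inputs so it learns to be robust to perturbed audio.
    """
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(waveform ** 2)
    # Target SNR (dB) = 10 * log10(signal_power / noise_power)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=waveform.shape)
    return waveform + noise

# Usage: augment a 1-second synthetic "utterance" at 10 dB SNR.
t = np.linspace(0, 1, 16000, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)
noisy = augment_with_noise(clean, snr_db=10.0)
```

Whether such augmentation also helps against deliberately crafted adversarial perturbations (as opposed to random noise) is exactly the question the paper studies.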
Autor:
Teixeira, Francisco, Pizzi, Karla, Olivier, Raphael, Abad, Alberto, Raj, Bhiksha, Trancoso, Isabel
Membership Inference (MI) poses a substantial privacy threat to the training data of Automatic Speech Recognition (ASR) systems, while also offering an opportunity to audit these models with regard to user data. This paper explores the effectiveness…
External link:
http://arxiv.org/abs/2405.01207
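A minimal sketch of the membership-inference idea mentioned above, assuming the classic loss-thresholding attack (not the paper's specific method): a sample is flagged as a training member when the model's loss on it is suspiciously low, since models tend to fit their training data more closely than unseen data. The loss values are hypothetical.

```python
import numpy as np

def loss_threshold_mia(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Loss-thresholding membership inference: predict 'member' (True)
    for samples on which the model's loss falls below the threshold."""
    return losses < threshold

# Hypothetical per-sample losses: members tend to have lower loss
# than non-members because the model has memorized them to some degree.
member_losses = np.array([0.05, 0.10, 0.08, 0.12])
nonmember_losses = np.array([0.90, 1.20, 0.75, 1.05])

preds_members = loss_threshold_mia(member_losses, threshold=0.5)
preds_nonmembers = loss_threshold_mia(nonmember_losses, threshold=0.5)
```

In practice the threshold is calibrated on held-out data, and real attacks on ASR systems must contend with much noisier loss distributions than this toy example.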
Most recent speech privacy efforts have focused on anonymizing acoustic speaker attributes, but far less research has addressed protecting information in speech content. We introduce a toy problem that explores an emerging type of privacy concern…
External link:
http://arxiv.org/abs/2401.03936
Audio adversarial examples are audio files that have been manipulated to fool an automatic speech recognition (ASR) system while still sounding benign to a human listener. Most methods to generate such samples are based on a two-step algorithm: first…
External link:
http://arxiv.org/abs/2310.03349
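The perturbation step common to adversarial-example generation can be sketched with a one-step FGSM-style update on a toy differentiable model (this is a generic illustration, not the two-step algorithm the abstract describes; the linear model and values are assumptions): the input is nudged in the direction of the loss gradient's sign, bounded by a small budget eps.

```python
import numpy as np

def fgsm_perturb(x: np.ndarray, grad: np.ndarray, eps: float) -> np.ndarray:
    """One-step FGSM-style perturbation: move the input along the sign of
    the loss gradient, with each component bounded by eps."""
    return x + eps * np.sign(grad)

# Toy differentiable "model": squared error of a linear predictor.
w = np.array([1.0, 2.0, -1.0])
x = np.array([0.2, -0.1, 0.4])
y = 0.0

loss = (w @ x - y) ** 2
grad = 2 * (w @ x - y) * w        # d(loss)/dx
x_adv = fgsm_perturb(x, grad, eps=0.05)
loss_adv = (w @ x_adv - y) ** 2   # perturbation increases the loss
```

For audio, the same idea operates on waveform samples, with the extra constraint that the perturbation must remain inaudible to a human listener.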
Privacy in speech and audio has many facets. A particularly under-developed area of privacy in this domain involves consideration for information related to content and context. Speech content can include words and their meaning or even stylistic markers…
External link:
http://arxiv.org/abs/2301.08925
Published in:
Proc. 2nd Symposium on Security and Privacy in Speech Communication, 2022
Model inversion (MI) attacks allow the reconstruction of average per-class representations of a machine learning (ML) model's training data. It has been shown that in scenarios where each class corresponds to a different individual, such as face classifiers…
External link:
http://arxiv.org/abs/2301.03206
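The model-inversion idea can be sketched against the simplest possible target, a linear classifier (an illustrative assumption, not the paper's attack): gradient ascent searches for an input that maximizes the target class's score, with L2 regularization keeping the input bounded. For this model the optimum has the closed form x* = w_c / (2 * lam), so the reconstruction points along the class weight vector.

```python
import numpy as np

def invert_class(weights: np.ndarray, target: int,
                 lam: float = 0.5, lr: float = 0.1, steps: int = 200) -> np.ndarray:
    """Gradient-ascent model inversion against a linear classifier:
    maximize (w_c @ x - lam * ||x||^2) over the input x."""
    w_c = weights[target]
    x = np.zeros_like(w_c)
    for _ in range(steps):
        grad = w_c - 2 * lam * x  # gradient of the regularized class score
        x += lr * grad
    return x

# Hypothetical 2-class linear model; the inversion recovers a "prototype"
# input aligned with the target class's weight vector.
W = np.array([[1.0, -2.0, 0.5],
              [-1.0, 1.0, 0.0]])
recon = invert_class(W, target=0)
```

Against deep models the same principle applies, but the optimization is non-convex and the recovered input is an average per-class representation rather than any single training sample.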
The recent emergence of deepfakes has brought manipulated and generated content to the forefront of machine learning research. Automatic detection of deepfakes has seen many new machine learning techniques; however, human detection capabilities are f…
External link:
http://arxiv.org/abs/2107.09667
This book constitutes the proceedings of the 24th International Conference on Speech and Computer, SPECOM 2022, held as a hybrid event in Gurugram, India, in November 2022. The 51 full and 9 short papers presented in this volume were carefully reviewed…