Showing 1 - 10
of 117
for the search: '"Wang, Derui"'
In recent years, it has become possible to replicate a speaker's voice almost perfectly from just a few speech samples, while malicious voice exploitation (e.g., telecom fraud for illegal financial gain) has caused serious harm in daily life. Therefore, it is …
External link:
http://arxiv.org/abs/2410.20742
Video classification systems based on Deep Neural Networks (DNNs) have demonstrated excellent performance in accurately verifying video content. However, recent studies have shown that DNNs are highly vulnerable to adversarial examples. Therefore, a …
External link:
http://arxiv.org/abs/2408.12099
Face recognition pipelines have been widely deployed in various mission-critical systems in trustworthy, equitable, and responsible AI applications. However, the emergence of adversarial attacks has threatened the security of the entire recognition pipeline …
External link:
http://arxiv.org/abs/2407.08514
Model extraction attacks currently pose a non-negligible threat to the security and privacy of deep learning models. By querying the model with a small dataset and using the query results as the ground-truth labels, an adversary can steal a piracy model …
External link:
http://arxiv.org/abs/2407.01251
This paper addresses a significant gap in the Autonomous Cyber Operations (ACO) literature: the absence of effective edge-blocking ACO strategies in dynamic, real-world networks. It specifically targets the cybersecurity vulnerabilities of organizational …
External link:
http://arxiv.org/abs/2406.19596
Randomized Smoothing (RS) is currently a scalable certified defense method providing robustness certification against adversarial examples. Although significant progress has been achieved in providing defenses against $\ell_p$ adversaries, …
External link:
http://arxiv.org/abs/2406.02309
In light of the widespread application of Automatic Speech Recognition (ASR) systems, their security concerns have received much more attention than ever before, primarily due to the susceptibility of Deep Neural Networks. Previous studies have illustrated …
External link:
http://arxiv.org/abs/2405.09470
The exploitation of publicly accessible data has led to escalating concerns regarding data privacy and intellectual property (IP) breaches in the age of artificial intelligence. To safeguard both data privacy and IP-related domain knowledge, efforts …
External link:
http://arxiv.org/abs/2405.03316
Previous work has shown that well-crafted adversarial perturbations can threaten the security of video recognition systems. Attackers can invade such models with a low query budget when the perturbations are semantic-invariant, such as StyleFool. …
External link:
http://arxiv.org/abs/2403.11656
Author:
Ye, Dayong, Zhu, Tianqing, Zhu, Congcong, Wang, Derui, Gao, Kun, Shi, Zewei, Shen, Sheng, Zhou, Wanlei, Xue, Minhui
Machine unlearning refers to the process of mitigating the influence of specific training data on machine learning models based on removal requests from data owners. However, one important area that has been largely overlooked in unlearning research …
External link:
http://arxiv.org/abs/2312.15910