Showing 1 - 8 of 8 for search: '"Duan, Ranjie"'
Recently, Text-to-Image (T2I) models have achieved remarkable success in image generation and editing, yet these models still have many potential issues, particularly in generating inappropriate or Not-Safe-For-Work (NSFW) content. Strengthening attack…
External link:
http://arxiv.org/abs/2408.13896
Authors:
Jia, Xiaojun, Chen, Yuefeng, Mao, Xiaofeng, Duan, Ranjie, Gu, Jindong, Zhang, Rong, Xue, Hui, Cao, Xiaochun
Fast Adversarial Training (FAT) not only improves model robustness but also reduces the training cost of standard adversarial training. However, fast adversarial training often suffers from Catastrophic Overfitting (CO), which results in poor rob…
External link:
http://arxiv.org/abs/2308.11443
Developing a practically robust automatic speech recognition (ASR) system is challenging, since the model should not only maintain its original performance on clean samples, but also achieve consistent efficacy under small volume perturbations and large doma…
External link:
http://arxiv.org/abs/2307.12498
Authors:
Mao, Xiaofeng, Chen, Yuefeng, Duan, Ranjie, Zhu, Yao, Qi, Gege, Ye, Shaokai, Li, Xiaodan, Zhang, Rong, Xue, Hui
Adversarial Training (AT), commonly accepted as one of the most effective defenses against adversarial examples, can largely harm standard performance, and thus has limited usefulness in industrial-scale production and applicatio…
External link:
http://arxiv.org/abs/2209.07735
Humans can easily recognize visual objects with lost information: even when most details are lost and only the contour is preserved, e.g., in cartoons. However, in terms of visual perception by Deep Neural Networks (DNNs), the ability to recognize abstract objects (v…
External link:
http://arxiv.org/abs/2108.09034
Authors:
Mao, Xiaofeng, Qi, Gege, Chen, Yuefeng, Li, Xiaodan, Duan, Ranjie, Ye, Shaokai, He, Yuan, Xue, Hui
Recent advances in Vision Transformer (ViT) and its improved variants have shown that self-attention-based networks surpass traditional Convolutional Neural Networks (CNNs) in most vision tasks. However, existing ViTs focus on standard accuracy a…
External link:
http://arxiv.org/abs/2105.07926
Though it is well known that the performance of deep neural networks (DNNs) degrades under certain lighting conditions, there exists no study on the threat of light beams emitted from a physical source acting as an adversarial attack on DNNs in a real-world…
External link:
http://arxiv.org/abs/2103.06504
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. Existing works have mostly focused on either digital adversarial examples created via small and imperceptible perturbations, or physical-world adversarial examples create…
External link:
http://arxiv.org/abs/2003.08757