Showing 1 - 10 of 106
for search: '"Qian, Yaguan"'
F$^2$AT: Feature-Focusing Adversarial Training via Disentanglement of Natural and Perturbed Patterns
Author:
Qian, Yaguan, Zhao, Chenyu, Gu, Zhaoquan, Wang, Bin, Ji, Shouling, Wang, Wei, Zhou, Boyang, Zhou, Pan
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by well-designed perturbations. This could lead to disastrous results in critical applications such as self-driving cars, surveillance security, and medical diagnosis. At pres
External link:
http://arxiv.org/abs/2310.14561
Deep learning models are widely deployed in many applications, such as object detection in various security fields. However, these models are vulnerable to backdoor attacks. Most backdoor attacks have been intensively studied on classification models, but lit
External link:
http://arxiv.org/abs/2309.08953
Published in:
International Journal of Applied Mathematics and Computer Science, Vol 34, Iss 3, Pp 425-438 (2024)
Recent studies show that deep neural networks (DNNs) are extremely vulnerable to elaborately designed adversarial examples. Adversarial training, which uses adversarial examples as training data, has been proven to be one of the most effective method
External link:
https://doaj.org/article/5b68925ffde547e4b9fa9e9f4cc278f2
Author:
Liang, Xiaoyu, Qian, Yaguan, Huang, Jianchang, Ling, Xiang, Wang, Bin, Wu, Chunming, Swaileh, Wassim
Adversarial training, as one of the most effective defense methods against adversarial attacks, tends to learn an inclusive decision boundary to increase the robustness of deep learning models. However, due to the large and unnecessary increase in th
External link:
http://arxiv.org/abs/2207.07793
Recent studies show that deep neural networks (DNNs) are extremely vulnerable to elaborately designed adversarial examples. Adversarial learning with those adversarial examples has been proven to be one of the most effective methods to defend against suc
External link:
http://arxiv.org/abs/2207.01396
Author:
Ling, Xiang, Wu, Lingfei, Zhang, Jiangyu, Qu, Zhenqing, Deng, Wei, Chen, Xiang, Qian, Yaguan, Wu, Chunming, Ji, Shouling, Luo, Tianyue, Wu, Jingzheng, Wu, Yanjun
Malware has been one of the most damaging threats to computers, spanning multiple operating systems and various file formats. To defend against ever-increasing and ever-evolving malware, tremendous efforts have been made to propose a variety o
External link:
http://arxiv.org/abs/2112.12310
RGB-thermal scene parsing has recently attracted increasing research interest in the field of computer vision. However, most existing methods fail to perform good boundary extraction for prediction maps and cannot fully use high-level features. In ad
External link:
http://arxiv.org/abs/2112.05144
Author:
Qian, Yaguan, Huang, Wenzhuo, Yu, Qinqin, Yao, Tengteng, Ling, Xiang, Wang, Bin, Gu, Zhaoquan, Zhang, Yanchun
Published in:
In Neurocomputing 28 December 2024 610
Access to historical monuments' floor plans over a time period is necessary to understand their architectural evolution and history. Such knowledge bases also help to rebuild history by establishing connections between different events, persons, and f
External link:
http://arxiv.org/abs/2103.08064
Author:
Qian, Yaguan, Sun, Anlin
Deep learning technology promotes the rapid development of person re-identification (re-ID). However, some challenges still exist in the open world. First, existing re-ID research usually assumes only one factor variable (view, clothing,
External link:
http://arxiv.org/abs/2102.10798