Showing 1 - 3 of 3 results for search: '"Wenzhao Xiang"'
Published in:
National Science Review.
This perspective paper proposes a new adversarial training method based on large-scale pre-trained models to achieve state-of-the-art adversarial robustness on ImageNet.
Published in:
Computer Vision and Image Understanding. 229:103647
As designers of artificial intelligence try to outwit hackers, both sides continue to home in on AI's inherent vulnerabilities. Designed and trained from certain statistical distributions of data, AI's deep neural networks (DNNs) remain vulnerable to …
Authors:
Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu, Fangcheng Liu, Chao Zhang, Hongyang Zhang, Yichi Zhang, Shilong Liu, Chang Liu, Yajie Wang, Huipeng Zhou, Haoran Lyu, Yidan Xu, Zixuan Xu, Taoyu Zhu, Wenjun Li, Xianfeng Gao, Guoqiu Wang, Huanqian Yan, Ying Guo, Chaoning Zhang, Zheng Fang, Yang Wang, Bingyang Fu, Yunfei Zheng, Yekui Wang, Haorong Luo, Zhen Yang
Published in:
Many works have investigated adversarial attacks and defenses under settings where a bounded, imperceptible perturbation may be added to the input. However, in the real world, the attacker does not need to comply with this restriction. In fa…
External links:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::ffa82f18c225a03857c21ce979519c50
http://arxiv.org/abs/2110.09903
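The "bounded and imperceptible perturbation" setting the abstract above contrasts against can be illustrated with the classic Fast Gradient Sign Method (FGSM). The sketch below is not the method from any of the papers listed here; it is a generic, minimal NumPy example on a hand-written logistic-regression model, with all names (`fgsm_perturb`, the toy weights) chosen for illustration:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step against a logistic-regression model.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM moves eps in the sign of that
    gradient, so the perturbation stays inside an L-infinity ball
    of radius eps (the "bounded, imperceptible" constraint).
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted P(class 1)
    grad_x = (p - y) * w                            # dLoss/dx
    return x + eps * np.sign(grad_x)                # bounded adversarial step

# Toy model: predicts class 1 when 2*x[0] > 0.
w = np.array([2.0, 0.0])
b = 0.0
x = np.array([0.1, 0.5])                 # weakly positive example, label y=1
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.2)
# x_adv differs from x by at most eps per coordinate, yet flips the prediction.
```

An unrestricted attack, by contrast, would be free to replace `x` with any input that a human still recognizes as the original class, with no `eps` bound at all.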