Adversarial Attacks on ML Defense Models Competition
Author: | Dong, Yinpeng; Fu, Qi-An; Yang, Xiao; Xiang, Wenzhao; Pang, Tianyu; Su, Hang; Zhu, Jun; Tang, Jiayu; Chen, Yuefeng; Mao, XiaoFeng; He, Yuan; Xue, Hui; Li, Chao; Liu, Ye; Zhang, Qilong; Gao, Lianli; Yu, Yunrui; Gao, Xitong; Zhao, Zhe; Lin, Daquan; Lin, Jiadong; Song, Chuanbiao; Wang, Zihao; Wu, Zhennan; Guo, Yang; Cui, Jiequan; Xu, Xiaogang; Chen, Pengguang |
Year of publication: | 2021 |
Subject: | |
Document type: | Working Paper |
Description: | Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, a large number of defense techniques have been proposed in recent years to alleviate this problem. However, progress toward more robust models is often hampered by incomplete or incorrect robustness evaluations. To accelerate research on reliable evaluation of the adversarial robustness of current defense models in image classification, the TSAIL group at Tsinghua University and the Alibaba Security group organized this competition along with a CVPR 2021 workshop on adversarial machine learning (https://aisecure-workshop.github.io/amlcvpr2021/). The purpose of this competition is to motivate novel attack algorithms that evaluate adversarial robustness more effectively and reliably. Participants were encouraged to develop stronger white-box attack algorithms to find the worst-case robustness of different defenses. The competition was conducted on ARES (https://github.com/thu-ml/ares), an adversarial robustness evaluation platform, and was held on the TianChi platform (https://tianchi.aliyun.com/competition/entrance/531847/introduction) as part of the AI Security Challengers Program series. After the competition, we summarized the results and established a new adversarial robustness benchmark at https://ml.cs.tsinghua.edu.cn/ares-bench/, which allows users to upload adversarial attack algorithms and defense models for evaluation. Comment: Competition Report |
Database: | arXiv |
External link: |
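The description mentions white-box attacks used to probe the worst-case robustness of a defense. As a minimal illustration of what such an attack looks like, here is a sketch of projected gradient descent (PGD), the canonical white-box baseline: repeatedly step the input in the sign of the loss gradient, then project back into an L-infinity ball around the clean input. This is not the competition's submitted method; the toy logistic-regression model, `grad_fn`, and the weight vector `w` are illustrative assumptions, not anything from the source.

```python
import numpy as np

def pgd_attack(x, y, grad_fn, eps=0.1, alpha=0.02, steps=20):
    """L-infinity PGD sketch: ascend the loss gradient, then project
    the perturbed input back into the eps-ball around the clean x
    and into the valid input range [0, 1]."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv, y)                     # gradient of loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(g)        # signed gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid input range
    return x_adv

# Toy white-box "model": binary logistic regression with known weights,
# so the input gradient of the cross-entropy loss is available in closed form.
w = np.array([1.0, -2.0, 0.5])

def grad_fn(x, y):
    p = 1.0 / (1.0 + np.exp(-x @ w))  # predicted probability of class 1
    return (p - y) * w                # d(cross-entropy)/dx

x = np.array([0.5, 0.5, 0.5])
x_adv = pgd_attack(x, 1.0, grad_fn)
# x_adv stays within 0.1 of x per coordinate but lowers the
# model's confidence in the true class.
```

In a real evaluation the closed-form `grad_fn` would be replaced by automatic differentiation through the defended network; stronger competition attacks typically extend this loop with restarts, momentum, and adaptive loss functions.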