ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing
Author: Li, Xiaodan; Chen, Yuefeng; Zhu, Yao; Wang, Shuhui; Zhang, Rong; Xue, Hui
Year of publication: 2023
Source: CVPR 2023
Document type: Working Paper
Description: Recent studies have shown that higher accuracy on ImageNet usually leads to better robustness against different corruptions. Therefore, in this paper, instead of following the traditional research paradigm that investigates new out-of-distribution corruptions or perturbations that deep models may encounter, we conduct model debugging on in-distribution data to explore which object attributes a model may be sensitive to. To achieve this goal, we create a toolkit for object editing with control over backgrounds, sizes, positions, and directions, and build a rigorous benchmark named ImageNet-E(diting) for evaluating image classifier robustness with respect to object attributes. With ImageNet-E, we evaluate the performance of current deep learning models, including both convolutional neural networks and vision transformers. We find that most models are quite sensitive to attribute changes: a small change in the background can cause an average drop of 9.23% in top-1 accuracy. We also evaluate several robust models, including both adversarially trained models and other robustly trained models, and find that some of them show worse robustness against attribute changes than vanilla models. Based on these findings, we identify ways to enhance attribute robustness through preprocessing, architecture design, and training strategies. We hope this work provides insights to the community and opens up a new avenue for research in robust computer vision. The code and dataset are available at https://github.com/alibaba/easyrobust. Comment: Accepted by CVPR2023
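The benchmark's core metric is the drop in top-1 accuracy between original images and their attribute-edited counterparts. A minimal sketch of that computation is below; the `predict`-style prediction lists and the toy labels are illustrative stand-ins, not the paper's actual toolkit API.

```python
# Hypothetical sketch: measuring the top-1 accuracy drop a classifier
# suffers when object attributes (e.g. the background) are edited.

def top1_accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def accuracy_drop(preds_original, preds_edited, labels):
    """Drop in top-1 accuracy after attribute editing, in percent."""
    acc_orig = top1_accuracy(preds_original, labels)
    acc_edit = top1_accuracy(preds_edited, labels)
    return (acc_orig - acc_edit) * 100.0

# Toy example: 10 images; the model loses 2 correct predictions
# after the backgrounds are edited.
labels         = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
preds_original = [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]  # 9/10 correct
preds_edited   = [0, 1, 2, 3, 4, 5, 6, 0, 0, 0]  # 7/10 correct

print(round(accuracy_drop(preds_original, preds_edited, labels), 2))
```

In the full benchmark the same comparison would be run per edit type (background, size, position, direction) and averaged over the evaluation set, yielding figures like the 9.23% background-change drop reported above.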
Database: arXiv