Exploring Corruption Robustness: Inductive Biases in Vision Transformers and MLP-Mixers

Authors: Morrison, Katelyn; Gilby, Benjamin; Lipchak, Colton; Mattioli, Adam; Kovashka, Adriana
Publication year: 2021
Subject:
Document type: Working Paper
Description: Recently, vision transformers and MLP-based models have been developed to address some of the prevalent weaknesses of convolutional neural networks. Because transformers and the self-attention mechanism are new to this domain, it remains unclear to what degree these architectures are robust to corruptions. While some works propose that data augmentation remains essential for a model to be robust against corruptions, we instead explore the impact that the architecture itself has on corruption robustness. We find that vision transformer architectures are inherently more robust to corruptions than ResNet-50 and MLP-Mixers. We also find that vision transformers with 5 times fewer parameters than a ResNet-50 have more shape bias. Our code to reproduce our experiments is publicly available.
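The comparison described above can be illustrated with a minimal evaluation sketch: measuring top-1 accuracy of a pretrained ViT, MLP-Mixer, and ResNet-50 on clean versus corrupted inputs. This is not the authors' released code; the model names are as registered in the timm library, and the simple additive Gaussian-noise corruption and severity value are illustrative assumptions rather than the full ImageNet-C protocol.

```python
import torch
import timm

def add_gaussian_noise(images, severity=0.1):
    """Apply a simple additive Gaussian-noise corruption (illustrative only)."""
    return torch.clamp(images + severity * torch.randn_like(images), 0.0, 1.0)

@torch.no_grad()
def top1_accuracy(model, loader, corrupt=False, severity=0.1):
    """Compute top-1 accuracy over a loader, optionally corrupting each batch."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        if corrupt:
            images = add_gaussian_noise(images, severity)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Pretrained backbones of the three architecture families compared in the paper.
models = {
    "vit": timm.create_model("vit_base_patch16_224", pretrained=True),
    "mixer": timm.create_model("mixer_b16_224", pretrained=True),
    "resnet50": timm.create_model("resnet50", pretrained=True),
}

# `val_loader` is assumed to be an ImageNet-style DataLoader of (image, label) pairs.
# for name, model in models.items():
#     clean = top1_accuracy(model, val_loader)
#     noisy = top1_accuracy(model, val_loader, corrupt=True)
#     print(f"{name}: clean={clean:.3f}, corrupted={noisy:.3f}")
```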
Comment: Under review at the Uncertainty and Robustness in Deep Learning workshop at ICML 2021. Our appendix is attached to the last page of the paper.
Database: arXiv