CEB Improves Model Robustness

Author: Ian Fischer, Alexander A. Alemi
Language: English
Year of publication: 2020
Source: Entropy, Vol 22, Iss 10, p 1081 (2020)
Document type: article
ISSN: 1099-4300
DOI: 10.3390/e22101081
Description: Intuitively, one way to make classifiers more robust is to have them depend less sensitively on their input. The Information Bottleneck (IB) tries to learn compressed representations of the input that are still predictive, but scaling IB approaches up to large-scale image classification tasks has proved difficult. We demonstrate that the Conditional Entropy Bottleneck (CEB) not only scales to large-scale image classification tasks but also improves model robustness. CEB is easy to implement and works in tandem with data augmentation procedures. We report results of a large-scale adversarial robustness study on CIFAR-10, as well as the ImageNet-C Common Corruptions Benchmark, ImageNet-A, and PGD attacks.
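The abstract describes the Conditional Entropy Bottleneck only at a high level. As a rough illustration, a variational CEB training loss combines a residual-information term (an upper bound on I(X;Z|Y), built from a forward encoder e(z|x) and a backward encoder b(z|y)) with a classification term −log c(y|z). The sketch below is a minimal NumPy illustration under these assumptions; the function names are hypothetical, and the γ = e^(−ρ) weighting follows common descriptions of CEB but should be checked against the paper.

```python
import numpy as np

def gaussian_log_prob(z, mean, log_var):
    """Log density of a diagonal Gaussian, summed over latent dimensions."""
    return -0.5 * np.sum(
        log_var + (z - mean) ** 2 / np.exp(log_var) + np.log(2 * np.pi),
        axis=-1,
    )

def ceb_loss(z, enc_mean, enc_log_var, back_mean, back_log_var,
             class_logits, labels, rho=3.0):
    """Hypothetical per-batch CEB loss sketch (not the authors' code).

    z              : sampled latents, shape (batch, latent_dim)
    enc_*          : parameters of the forward encoder e(z|x)
    back_*         : parameters of the backward encoder b(z|y)
    class_logits   : classifier c(y|z) logits, shape (batch, num_classes)
    rho            : compression knob; gamma = exp(-rho) is one common
                     parameterization (an assumption here).
    """
    # Residual information term: log e(z|x) - log b(z|y),
    # a variational upper bound on I(X;Z|Y).
    log_e = gaussian_log_prob(z, enc_mean, enc_log_var)
    log_b = gaussian_log_prob(z, back_mean, back_log_var)
    residual = log_e - log_b

    # Classification term: -log c(y|z) (cross-entropy from the logits).
    log_probs = class_logits - np.log(
        np.sum(np.exp(class_logits), axis=-1, keepdims=True)
    )
    nll = -log_probs[np.arange(len(labels)), labels]

    gamma = np.exp(-rho)  # assumed weighting of the compression term
    return np.mean(gamma * residual + nll)
```

When both encoders agree (residual ≈ 0), the loss reduces to the ordinary cross-entropy; increasing ρ shrinks γ and relaxes the compression pressure, which is the single knob the abstract's "easy to implement" claim refers to in spirit.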
Database: Directory of Open Access Journals