Improving Neural Network Robustness Through Neighborhood Preserving Layers
Authors: | Christopher Malon, Erik Kruus, Lingzhou Xue, Bingyuan Liu |
---|---|
Year of publication: | 2021 |
Subject: |
Contextual image classification, Artificial neural network, Computer science, Robustness (computer science), Distortion, Benchmark (computing), Artificial intelligence, Gradient descent, Algorithm, MNIST database, Vulnerability (computing) |
Source: | Pattern Recognition. ICPR International Workshops and Challenges, ISBN 9783030687793, ICPR Workshops (6) |
DOI: | 10.1007/978-3-030-68780-9_17 |
Description: | One major source of vulnerability of neural networks in classification tasks is the overparameterized fully connected layers near the end of the network. In this paper, we propose a new neighborhood preserving layer that can replace these fully connected layers to improve network robustness. Networks incorporating these neighborhood preserving layers can be trained efficiently. We theoretically prove that the proposed layers are more robust against distortion because they effectively control the magnitude of the gradients. Finally, we empirically show that networks with the proposed layers are more robust against state-of-the-art gradient-descent-based attacks, such as the PGD attack, on the benchmark image classification datasets MNIST and CIFAR-10. |
Database: | OpenAIRE |
External link: |
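The description above evaluates robustness against the PGD (projected gradient descent) attack. As context for the record, here is a minimal sketch of a PGD attack against a toy logistic-regression model. This is not the paper's code; the function name, step size `alpha`, budget `eps`, and the linear model are all illustrative assumptions, but the loop follows the standard PGD recipe (signed gradient ascent on the loss, projected back into an L-infinity ball around the clean input).

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Sketch of an L-infinity PGD attack on a logistic-regression model.

    x : clean input vector, y : label in {0.0, 1.0},
    w, b : model weight vector and bias (illustrative toy model).
    Returns an adversarial input within distance eps of x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # Forward pass: sigmoid probability, then the cross-entropy
        # gradient with respect to the input, which is (p - y) * w.
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
        grad = (p - y) * w
        # Ascend the loss with a signed step, then project back into
        # the eps-ball around the clean input x.
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The signed step is what makes this the L-infinity variant of PGD; the `np.clip` call is the projection step that keeps the perturbation within the attack budget `eps`.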