Robustness of Generalized Learning Vector Quantization Models Against Adversarial Attacks
Author: | Sascha Saralajew, Lars Holdijk, Thomas Villmann, Maike Rees |
Year of publication: | 2019 |
Subject: | Learning vector quantization, Artificial neural network, Computer science, Machine learning, Adversarial attacks, Robustness (computer science), Artificial intelligence, Image processing |
Source: | Advances in Intelligent Systems and Computing, ISBN 9783030196417, WSOM |
DOI: | 10.1007/978-3-030-19642-4_19 |
Description: | Adversarial attacks, and the development of (deep) neural networks robust against them, are currently two widely researched topics. The robustness of Learning Vector Quantization (LVQ) models against adversarial attacks has, however, not yet been studied to the same extent. We therefore present an extensive evaluation of three LVQ models: Generalized LVQ, Generalized Matrix LVQ, and Generalized Tangent LVQ. The evaluation suggests that both Generalized LVQ and Generalized Tangent LVQ have a high base robustness, on par with the current state of the art in robust neural network methods. In contrast, Generalized Matrix LVQ shows a high susceptibility to adversarial attacks, scoring consistently behind all other models. Additionally, our numerical evaluation indicates that increasing the number of prototypes per class improves the robustness of the models. |
Database: | OpenAIRE |
External link: |
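The abstract above concerns prototype-based LVQ classifiers, which assign a sample the label of its nearest prototype. As a minimal illustration (not the paper's implementation; the prototypes, data, and function names below are hypothetical toy values), GLVQ-style classification and its relative-distance margin, which is commonly linked to robustness, can be sketched as:

```python
import numpy as np

def glvq_predict(x, prototypes, labels):
    """Assign x the label of its closest prototype (squared Euclidean distance)."""
    d = np.sum((prototypes - x) ** 2, axis=1)
    return labels[np.argmin(d)]

def glvq_relative_distance(x, y, prototypes, labels):
    """GLVQ classifier function mu(x) = (d+ - d-) / (d+ + d-).

    d+ is the distance to the closest prototype of the correct class y,
    d- the distance to the closest prototype of any other class.
    A negative value means x is correctly classified; its magnitude is a
    margin-like quantity often related to robustness against perturbations.
    """
    d = np.sum((prototypes - x) ** 2, axis=1)
    d_plus = d[labels == y].min()
    d_minus = d[labels != y].min()
    return (d_plus - d_minus) / (d_plus + d_minus)

# Toy example: one prototype per class in 2D.
protos = np.array([[0.0, 0.0], [2.0, 2.0]])
proto_labels = np.array([0, 1])
x = np.array([0.4, 0.3])
print(glvq_predict(x, protos, proto_labels))              # nearest prototype is class 0
print(glvq_relative_distance(x, 0, protos, proto_labels)) # negative: correct classification
```

The abstract's finding that more prototypes per class improve robustness fits this picture: additional prototypes can tighten the decision boundary around the data, which tends to increase the margin |mu(x)| for correctly classified points.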