Towards Adversarially Robust DDoS-Attack Classification
Authors: Michael Guarino, Casimer M. DeCusatis, Pablo Rivas
Year: 2020
Subjects: Machine learning; Artificial neural network; Denial-of-service attack; Adversarial system; Learning classifier system; Artificial intelligence; Computer science; Task analysis; Classifier
Source: UEMCON
DOI: 10.1109/uemcon51285.2020.9298167
Abstract: On the frontier of cybersecurity is a class of emergent security threats that learn to find vulnerabilities in machine learning systems. A supervised machine learning classifier learns a mapping from x to y, where x is a vector of input features and y is the associated label. Neural networks achieve state-of-the-art performance on most vision, audio, and natural language processing tasks, yet they have been shown to be vulnerable to adversarial perturbations of the input, which cause them to misclassify with high confidence. Adversarial perturbations are small but targeted modifications to the input, often undetectable by the human eye, and they pose a risk to applications that rely on machine learning models. Neural networks can classify distributed denial of service (DDoS) attacks by learning a dataset of attack characteristics visualized using three-axis hive plots. In this work we present a novel application of a classifier trained to classify DDoS attacks that is robust to some of the most common known classes of gradient-based and gradient-free adversarial attacks.
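The gradient-based attacks the abstract refers to can be illustrated with the Fast Gradient Sign Method (FGSM), which perturbs the input in the direction of the sign of the loss gradient. The sketch below is a minimal illustration using a plain logistic-regression classifier rather than the paper's neural network; the function name `fgsm_perturb` and the toy weights are illustrative assumptions, not part of the paper.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """FGSM sketch for a logistic-regression classifier
    p(y=1|x) = sigmoid(w.x + b) under binary cross-entropy loss.

    The gradient of the loss w.r.t. the input is (p - y) * w,
    so the adversarial example is x + eps * sign((p - y) * w).
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted probability
    grad_x = (p - y) * w                            # dLoss/dx for BCE
    return x + eps * np.sign(grad_x)                # L-infinity-bounded step

# Toy example: a point correctly classified as class 1 ...
w, b = np.array([2.0, -3.0]), 0.0
x = np.array([0.5, 0.2])            # decision score w.x + b = 0.4 > 0
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.1)
# ... is pushed across the decision boundary by a perturbation of at
# most eps per feature: w.x_adv + b = -0.1 < 0, so the label flips.
```

Because the perturbation is bounded by `eps` in the L-infinity norm, each feature changes by at most 0.1 here, yet the predicted class flips; defenses of the kind the paper proposes aim to keep the classification stable under such perturbations.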
Database: OpenAIRE
External link: