Detecting Adversarial Attacks in the Context of Bayesian Networks
Author: Emad Alsuwat, Hatim Alsuwat, John R. Rose, Marco Valtorta, Csilla Farkas
Contributors: University of South Carolina [Columbia], Simon N. Foley, TC 11, WG 11.3
Language: English
Year of publication: 2019
Subject: Computer science; Machine learning; Adversarial machine learning; Bayesian inference; Bayesian networks; Structure learning; The PC algorithm; Data poisoning attacks; Long-duration attacks; Detection methods; Artificial intelligence; [INFO]Computer Science [cs]
Source: Lecture Notes in Computer Science; 33rd IFIP Annual Conference on Data and Applications Security and Privacy (DBSec), Jul 2019, Charleston, SC, United States, pp. 3-22; Data and Applications Security and Privacy XXXIII, ISBN 9783030224783
DOI: 10.1007/978-3-030-22479-0_1
Description: In this research, we study data poisoning attacks against Bayesian network structure learning algorithms. We propose using the distance between Bayesian network models, together with a measure of data conflict, to detect data poisoning attacks. We propose a 2-layered framework that detects both one-step and long-duration data poisoning attacks. Layer 1 enforces “reject on negative impacts” detection; i.e., input that changes the Bayesian network model is labeled potentially malicious. Layer 2 aims to detect long-duration attacks; i.e., observations in the incoming data that conflict with the original Bayesian model. We show that for a typical small Bayesian network, only a few contaminated cases are needed to corrupt the learned structure. Our detection methods are effective not only against one-step attacks but also against sophisticated long-duration attacks, and we present empirical results. (See the illustrative sketch below.)
Database: OpenAIRE
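The abstract names two quantities, a distance between Bayesian network models (Layer 1) and a data-conflict value (Layer 2), without giving concrete formulas. Below is a minimal, illustrative Python sketch of such a two-layer screen, assuming structural Hamming distance as the model distance and a Jensen-style conflict measure, conf(e) = log(∏ᵢ P(eᵢ) / P(e)), as the conflict value; the function names, threshold, and toy numbers are this sketch's assumptions, not the paper's implementation.

```python
import math


def structural_hamming_distance(edges_a, edges_b):
    """Structural Hamming distance between two DAGs, each given as a
    set of directed (parent, child) tuples: counts edges that must be
    added, deleted, or reversed to turn one graph into the other."""
    skel_a = {frozenset(e) for e in edges_a}
    skel_b = {frozenset(e) for e in edges_b}
    dist = len(skel_a ^ skel_b)          # additions and deletions
    for edge in skel_a & skel_b:         # shared skeleton edges
        u, v = tuple(edge)
        if ((u, v) in edges_a) != ((u, v) in edges_b):
            dist += 1                    # reversed orientation
    return dist


def conflict_measure(finding_marginals, joint_probability):
    """Jensen-style conflict measure conf(e) = log(prod_i P(e_i) / P(e)).
    Positive values mean the findings are rarer together than
    independence would predict, i.e. they conflict with the model."""
    return math.log(math.prod(finding_marginals) / joint_probability)


def screen_incoming_batch(original_edges, relearned_edges,
                          finding_marginals, joint_probability,
                          conflict_threshold=0.0):
    """Layer 1: 'reject on negative impacts' -- flag any batch whose
    inclusion changes the learned structure (one-step attacks).
    Layer 2: flag batches whose observations conflict with the
    original model (aimed at slow, long-duration poisoning)."""
    if structural_hamming_distance(original_edges, relearned_edges) > 0:
        return "reject (layer 1): structure changed, potential one-step attack"
    if conflict_measure(finding_marginals, joint_probability) > conflict_threshold:
        return "reject (layer 2): data conflicts with model, potential long-duration attack"
    return "accept"


# Toy usage with a hypothetical two-edge network; the relearned
# structure has one reversed edge, so Layer 1 rejects the batch.
original = {("Smoking", "Cancer"), ("Cancer", "Dyspnoea")}
relearned = {("Smoking", "Cancer"), ("Dyspnoea", "Cancer")}
print(screen_incoming_batch(original, relearned, [0.3, 0.1], 0.005))
```

Any other model distance (e.g., an edit distance over CPDAGs) or conflict score could be swapped in; the sketch only illustrates the two-gate structure the abstract describes.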