Node Injection for Class-specific Network Poisoning
Author: Sharma, Ansh Kumar; Kukreja, Rahul; Kharbanda, Mayank; Chakraborty, Tanmoy
Year of publication: 2023
Subject:
Source: Neural Networks 166 (2023) 236-247
Document type: Working Paper
DOI: 10.1016/j.neunet.2023.07.025
Description: Graph Neural Networks (GNNs) are powerful at learning rich network representations that aid the performance of downstream tasks. However, recent studies have shown that GNNs are vulnerable to adversarial attacks involving node injection and network perturbation. Among these, node injection attacks are more practical, as they do not require manipulating the existing network and can be carried out more realistically. In this paper, we propose a novel problem statement: a class-specific poison attack on graphs, in which the attacker aims to misclassify specific nodes of a target class into a different class using node injection. Additionally, the nodes are injected in such a way that they are camouflaged as benign nodes. We propose NICKI, a novel attack strategy that uses an optimization-based approach to sabotage the performance of GNN-based node classifiers. NICKI works in two phases: it first learns the node representations and then generates the features and edges of the injected nodes. Extensive experiments and ablation studies on four benchmark networks show that NICKI is consistently better than four baseline attack strategies at misclassifying nodes in the target class. We also show that the injected nodes are properly camouflaged as benign, thus making the poisoned graph indistinguishable from its clean version with respect to various topological properties. Comment: 28 pages, 5 figures
Database: arXiv
External link:
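
For illustration, below is a minimal, hypothetical sketch of a class-specific node-injection poisoning attack of the kind the abstract describes. It is not the authors' NICKI implementation: it assumes a fixed two-layer linearized GCN surrogate with weights `W`, a dense adjacency matrix, and a simple continuous relaxation of the injected edges optimized by gradient descent; the names `inject_nodes`, `n_inject`, and `E_inj` are illustrative.

```python
# Hypothetical sketch of a class-specific node-injection poisoning attack
# (in the spirit of, but not identical to, NICKI). Assumptions: dense graph,
# fixed surrogate weights W, relaxed injected edges trained by gradient descent.
import torch
import torch.nn.functional as F

def normalize_adj(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + torch.eye(A.size(0))
    d = A_hat.sum(1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def inject_nodes(A, X, y, W, target_class, n_inject=5, epochs=200, lr=0.1):
    """Learn features and edges for injected nodes so that nodes of
    `target_class` are pushed away from their true label by a two-hop
    linearized GCN surrogate with fixed weights W ([n_features, n_classes])."""
    n, f = X.shape
    target_idx = (y == target_class).nonzero(as_tuple=True)[0]

    # Learnable injected-node features and relaxed edges to existing nodes.
    X_inj = torch.zeros(n_inject, f, requires_grad=True)
    E_inj = torch.zeros(n_inject, n, requires_grad=True)  # sigmoid -> [0, 1]

    opt = torch.optim.Adam([X_inj, E_inj], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        edges = torch.sigmoid(E_inj)
        # Assemble the poisoned (relaxed) adjacency and feature matrices.
        A_full = torch.cat([
            torch.cat([A, edges.t()], dim=1),
            torch.cat([edges, torch.zeros(n_inject, n_inject)], dim=1),
        ], dim=0)
        X_full = torch.cat([X, X_inj], dim=0)

        A_norm = normalize_adj(A_full)
        logits = A_norm @ (A_norm @ X_full) @ W  # two-hop linearized GCN surrogate
        # Maximize the loss of target-class nodes on their true label
        # (i.e. minimize the negative cross-entropy on those nodes).
        loss = -F.cross_entropy(logits[target_idx],
                                torch.full_like(target_idx, target_class))
        loss.backward()
        opt.step()

    # Discretize the relaxed edges to obtain the final injected connections.
    return X_inj.detach(), (torch.sigmoid(E_inj) > 0.5).float().detach()
```

In practice, one would retrain and evaluate the victim GNN on the poisoned graph and constrain the injected features and degrees so the new nodes remain camouflaged as benign; NICKI's actual two-phase objective and representation-learning step are more involved than this sketch.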