Adversarial Attack on Large Scale Graph
Author: | Tao Xie, Xiangnan He, Fenfang Xie, Zibin Zheng, Jintang Li, Chen Liang |
Language: | English |
Year: | 2020 |
Subject: |
FOS: Computer and information sciences
Computer Science - Machine Learning; Theoretical computer science; Computer Science - Cryptography and Security; Degree (graph theory); Computer science; Computer Science - Artificial Intelligence; Assortativity; Machine Learning (stat.ML); Scale (descriptive set theory); Measure (mathematics); Bottleneck; Machine Learning (cs.LG); Computer Science Applications; Adversarial system; Artificial Intelligence (cs.AI); Computational Theory and Mathematics; Robustness (computer science); Statistics - Machine Learning; Metric (mathematics); Cryptography and Security (cs.CR); Information Systems |
Description: | Recent studies have shown that graph neural networks (GNNs) lack robustness to adversarial perturbations and can therefore be easily fooled. Most current attacks on GNNs use gradient information to guide the attack and achieve strong performance. However, their high time and space complexity makes them unmanageable for large-scale graphs and is the major bottleneck preventing practical use. We argue that the main reason is that they must operate on the whole graph, so time and space complexity grow with the data scale. In this work, we propose an efficient Simplified Gradient-based Attack (SGA) method to bridge this gap. SGA causes GNNs to misclassify specific target nodes through a multi-stage attack framework that needs only a much smaller subgraph. In addition, we present a practical metric named Degree Assortativity Change (DAC) to measure the impact of adversarial attacks on graph data. We evaluate our attack method on four real-world graph networks by attacking several commonly used GNNs. The experimental results demonstrate that SGA achieves significant time and memory efficiency improvements while maintaining competitive attack performance compared to state-of-the-art attack techniques. Accepted by TKDE. Code is available at https://github.com/EdisonLeeeee/SGAttack. |
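The DAC metric mentioned in the abstract builds on degree assortativity, the Pearson correlation of the degrees at the two endpoints of each edge. The sketch below is an assumption-laden illustration of that idea, not the paper's actual definition: the function names and the use of an absolute difference between the clean and attacked graph are hypothetical.

```python
def degree_assortativity(edges):
    """Pearson correlation of endpoint degrees over an undirected
    edge list; each edge is counted in both directions."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    m = sum(xs) / n  # means of xs and ys coincide by symmetry
    cov = sum((x - m) * (y - m) for x, y in zip(xs, ys)) / n
    var = sum((x - m) ** 2 for x in xs) / n  # zero for regular graphs
    return cov / var

def degree_assortativity_change(clean_edges, attacked_edges):
    """Hypothetical reading of DAC from the abstract: the shift in
    degree assortativity that the attack's perturbation causes."""
    return abs(degree_assortativity(clean_edges)
               - degree_assortativity(attacked_edges))
```

For example, a 4-node path graph has assortativity -0.5 while a 3-leaf star has -1.0, so this sketch would report a DAC of 0.5 for an attack that rewired the path into the star. Note the variance is zero for regular graphs, where assortativity is undefined; a robust implementation would need to handle that case.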
Database: | OpenAIRE |
External link: |