Uncertainty estimation-based adversarial attacks: a viable approach for graph neural networks

Author: Ismail Alarab, Simant Prakoonwit
Year of publication: 2023
Source: Soft Computing 27:7925-7937
ISSN: 1432-7643 (print), 1433-7479 (electronic)
DOI: 10.1007/s00500-023-08031-0
Description: Uncertainty estimation has received considerable attention in applied machine learning as a means of capturing model uncertainty. For instance, the Monte-Carlo dropout method (MC-dropout), an approximate Bayesian approach, has been widely adopted for producing model uncertainty owing to its simplicity and efficiency. However, MC-dropout has shown shortcomings in capturing erroneous predictions that lie in the overlap between classes. Such predictions stem from noisy data points and can neither be reduced with more training data nor detected by model uncertainty. On the other hand, Monte-Carlo based on adversarial attacks (MC-AA) perturbs the inputs along adversarial-attack directions to capture model uncertainty, and it mitigates the shortcoming above by flagging wrongly labelled points in overlapping regions. Motivated by the fact that MC-AA has so far been validated only on standard neural networks, we apply it to various graph neural network models using two public real-world graph datasets, Elliptic and GitHub. We first perform binary node classification, then apply MC-AA and other recent uncertainty estimation methods to capture the uncertainty of each model. We compute uncertainty evaluation metrics to compare the quality of the resulting uncertainty estimates. The results highlight the efficacy of MC-AA in capturing uncertainty in graph neural networks, where it outperforms the other methods considered.
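
To illustrate the general idea described above, the following is a minimal sketch of MC-AA-style uncertainty for a binary node classifier, not the authors' exact implementation. The helper name `mc_aa_uncertainty`, the PyTorch GNN interface `model(x, edge_index)`, and the parameters `eps_max` and `n_steps` are assumptions introduced here for illustration; the sketch follows the idea stated in the abstract of sweeping FGSM-style input perturbations and measuring how the predictions vary.

```python
# A hypothetical sketch of MC-AA-style uncertainty estimation (assumptions:
# a trained PyTorch GNN taking node features `x` and an `edge_index`, and
# FGSM-sign perturbations swept over [-eps_max, eps_max]).

import torch
import torch.nn.functional as F

def mc_aa_uncertainty(model, x, edge_index, eps_max=0.1, n_steps=10):
    """Per-node uncertainty from adversarial input perturbations
    (illustrative helper, not the paper's reference code)."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)

    # Forward pass; assume logits of shape [N, 2] for binary node classes.
    logits = model(x, edge_index)
    pred = logits.argmax(dim=-1)

    # FGSM-like direction: sign of the loss gradient w.r.t. the inputs,
    # taken at the model's own predicted labels.
    loss = F.cross_entropy(logits, pred)
    grad_sign = torch.autograd.grad(loss, x)[0].sign()

    # Sweep perturbation magnitudes and collect softmax outputs.
    probs = []
    with torch.no_grad():
        for eps in torch.linspace(-eps_max, eps_max, n_steps):
            out = model(x + eps * grad_sign, edge_index)
            probs.append(F.softmax(out, dim=-1))
    probs = torch.stack(probs)            # [n_steps, N, 2]

    # Mutual information: entropy of the mean prediction minus the
    # mean entropy of the individual perturbed predictions.
    mean_p = probs.mean(dim=0)
    entropy_mean = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(-1)
    mean_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(0)
    return entropy_mean - mean_entropy    # higher value = more uncertain
```

Sweeping epsilon symmetrically around zero probes the decision boundary in both directions, so nodes near an overlapping class region yield predictions that flip under small perturbations and receive high mutual information, which is the behaviour the abstract credits to MC-AA.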
Database: OpenAIRE