On GNN explainability with activation patterns

Authors: Veyrin-Forrer, Luca, Kamal, Ataollah, Duffner, Stefan, Plantevit, Marc, Robardet, Céline
Contributors: Laboratoire d'InfoRmatique en Image et Systèmes d'information (LIRIS), Institut National des Sciences Appliquées de Lyon (INSA Lyon), Institut National des Sciences Appliquées (INSA)-Université de Lyon-Institut National des Sciences Appliquées (INSA)-Université de Lyon-Centre National de la Recherche Scientifique (CNRS)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-École Centrale de Lyon (ECL), Université de Lyon-Université Lumière - Lyon 2 (UL2), Robardet, Céline
Language: English
Year of publication: 2021
Subject:
Description: GNNs are powerful models based on node representation learning that perform particularly well in many machine learning problems related to graphs. The major obstacle to the deployment of GNNs is mostly a problem of societal acceptability and trustworthiness, properties which require making the internal functioning of such models explicit. Here, we propose to mine activation patterns in the hidden layers to understand how GNNs perceive the world. The problem is not to discover activation patterns that are individually highly discriminating for an output of the model. Instead, the challenge is to provide a small set of patterns that cover all input graphs. To this end, we introduce the subjective activation pattern domain. We define an effective and principled algorithm to enumerate activation patterns in each hidden layer. The proposed approach for quantifying the interest of these patterns is rooted in information theory and is able to account for background knowledge on the input graph data. The activation patterns can then be redescribed using pattern languages involving interpretable features. We show that the activation patterns provide insights into the characteristics used by the GNN to classify the graphs. In particular, this makes it possible to identify the hidden features built by the GNN through its different layers. These patterns can subsequently be used to explain GNN decisions. Experiments on both synthetic and real-life datasets show highly competitive performance, with up to 200% improvement in fidelity on explaining graph classification over state-of-the-art methods.
Database: OpenAIRE
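
As a rough illustration of the notion of "activation pattern" described in the abstract, the sketch below binarizes the hidden-layer activations of a toy one-layer GNN and counts which binary patterns occur across nodes. This is not the authors' code or algorithm: the toy graph, the layer definition, and the simple "activated if positive after ReLU" rule are illustrative assumptions only.

```python
# Minimal sketch (assumptions, not the paper's method): extract binary
# activation patterns from the hidden layer of a toy GNN.
import torch
from collections import Counter

torch.manual_seed(0)

def toy_gnn_layer(adj, x, weight):
    """One message-passing step: aggregate neighbours, then linear + ReLU."""
    return torch.relu(adj @ x @ weight)

# Random toy graph: 6 nodes, 3 input features, hidden layer of width 4.
adj = (torch.rand(6, 6) < 0.4).float()
adj = ((adj + adj.t()) > 0).float()      # symmetrize: undirected toy graph
x = torch.rand(6, 3)
weight = torch.randn(3, 4)

hidden = toy_gnn_layer(adj, x, weight)   # hidden-layer node embeddings

# An activation pattern records which hidden components fire for each node.
patterns = (hidden > 0).int()            # 1 = component activated, 0 = not
counts = Counter(tuple(p.tolist()) for p in patterns)
print(counts.most_common())              # patterns shared by several nodes
```

In the paper itself, such patterns are mined per hidden layer, scored with a subjective (information-theoretic) interestingness measure, and then redescribed with interpretable graph features; the snippet only shows what the raw binarized activations look like before any of that analysis.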