Author: |
Bharathwajan Rajendran, Chandran G. Vidya, J. Sanil, S. Asharaf |
Language: |
English |
Year of publication: |
2024 |
Subject: |
|
Source: |
Human-Centric Intelligent Systems, Vol 4, Iss 1, Pp 53-76 (2024) |
Document type: |
article |
ISSN: |
2667-1336 |
DOI: |
10.1007/s44230-023-00058-8 |
Description: |
Abstract Topic modelling is a Natural Language Processing (NLP) technique that has gained popularity recently. It identifies word co-occurrence patterns within a document corpus to reveal hidden topics. The Graph Neural Topic Model (GNTM) is a topic modelling technique that uses Graph Neural Networks (GNNs) to learn document representations effectively. It provides high-precision documents-topics and topics-words probability distributions. Such models find immense application in many sectors, including healthcare, financial services, and safety-critical systems like autonomous cars. However, the model is not explainable: users cannot comprehend its underlying decision-making process. This paper introduces a technique to explain the documents-topics probability distribution output of GNTM. The explanation is achieved by building a local explainable model, such as a probabilistic Naïve Bayes classifier. Experimental results on various benchmark NLP datasets show a fidelity of 88.39% between the predictions of GNTM and the local explainable model. This similarity implies that the proposed technique can effectively explain the documents-topics probability distribution output of GNTM. |
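The core idea described in the abstract, fitting an interpretable surrogate to reproduce a black-box topic model's document-topic assignments and scoring their agreement as fidelity, can be sketched as follows. This is a minimal illustration only: the toy corpus, the stand-in GNTM labels, and the choice of scikit-learn's CountVectorizer and MultinomialNB are assumptions for demonstration, not the authors' implementation. |
```python
# Hedged sketch of a local explainable surrogate for a topic model.
# All data and model choices below are illustrative assumptions; in the
# paper's pipeline, the topic labels would come from a trained GNTM.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy corpus standing in for a benchmark NLP dataset.
docs = [
    "stock market prices rise on strong earnings",
    "doctors report new treatment for heart disease",
    "self driving cars improve road safety sensors",
    "bank interest rates and inflation outlook",
    "hospital trial shows vaccine efficacy",
    "autonomous vehicle navigation and lidar",
]

# Stand-in for GNTM's documents-topics output: the argmax topic per document.
gntm_topics = np.array([0, 1, 2, 0, 1, 2])

# Bag-of-words features for the interpretable surrogate.
X = CountVectorizer().fit_transform(docs)

# Fit the local explainable model (Naive Bayes) to mimic GNTM's assignments.
surrogate = MultinomialNB().fit(X, gntm_topics)

# Fidelity: fraction of documents where the surrogate agrees with GNTM.
fidelity = (surrogate.predict(X) == gntm_topics).mean()
print(f"Fidelity between surrogate and GNTM labels: {fidelity:.2%}")
```
The surrogate's per-class log probabilities over words then offer a human-readable account of why a document was assigned to a topic, which is the kind of local explanation the abstract refers to. |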
Database: |
Directory of Open Access Journals |
External link: |
|