Recurrence-Aware Long-Term Cognitive Network for Explainable Pattern Classification
Author: Gonzalo Napoles, Yamisleydi Salgueiro, Isel Grau, Maikel Leon Espinosa
Contributors: Cognitive Science & AI, Information Systems IE&IS, EAISI Health
Language: English
Year of publication: 2021
Subject: FOS: Computer and information sciences; Neurons; Mathematical models; Computer Science - Machine Learning (cs.LG); Numerical models; Data models; machine learning; interpretability; Computational modeling; Computer Science Applications; Human-Computer Interaction; Predictive models; Cognition; long-term cognitive networks (LTCNs); Control and Systems Engineering; Models; Maps; recurrent neural networks; Electrical and Electronic Engineering; Explainable artificial intelligence; Software; Information Systems
Source: IEEE Transactions on Cybernetics, pp. 1-12, ISSN 2168-2267 (Pure TUe); arXiv:2107.03423, 2021 (Cornell University Library)
ISSN: 2331-8422; 2168-2267
DOI: 10.48550/arXiv.2107.03423
Description: Machine-learning solutions for pattern classification problems are now widely deployed in society and industry. However, the lack of transparency and accountability of the most accurate models often hinders their safe use, so there is a clear need for explainable artificial intelligence mechanisms. Model-agnostic methods that summarize feature contributions exist, but their interpretability is limited to the predictions made by black-box models. An open challenge is to develop models that are intrinsically interpretable and produce their own explanations, even for classes of models traditionally considered black boxes, such as (recurrent) neural networks. In this article, we propose a long-term cognitive network (LTCN) for interpretable pattern classification of structured data. Our method provides its own explanation mechanism by quantifying the relevance of each feature in the decision process. To support interpretability without sacrificing performance, the model gains flexibility through a quasi-nonlinear reasoning rule that allows controlling the amount of nonlinearity. In addition, we propose a recurrence-aware decision model that avoids the issues posed by the unique fixed point, together with a deterministic learning algorithm to compute the tunable parameters. Simulations show that our interpretable model obtains competitive results compared with state-of-the-art white-box and black-box models.
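To make the "quasi-nonlinear reasoning rule" concrete, below is a minimal NumPy sketch, assuming the convex-combination form used in related LTCN literature: each new activation mixes a nonlinear update with the initial activation, with a mixing coefficient phi in [0, 1] controlling the amount of nonlinearity. The function name, the sigmoid transfer function, and the toy weight matrix are illustrative assumptions, not the paper's exact formulation or learned parameters.

```python
import numpy as np

def quasi_nonlinear_reasoning(a0, W, phi=0.8, steps=5):
    """Iterate an (assumed) quasi-nonlinear reasoning rule of the form
        a(t) = phi * f(a(t-1) @ W) + (1 - phi) * a(0),
    where f is a sigmoid and phi in [0, 1] controls nonlinearity.
    With phi < 1, the initial activation a(0) keeps influencing every
    state, so the recurrence need not collapse to a unique fixed point
    shared by all inputs.
    """
    f = lambda x: 1.0 / (1.0 + np.exp(-x))  # sigmoid transfer function
    a = a0.copy()
    for _ in range(steps):
        a = phi * f(a @ W) + (1.0 - phi) * a0
    return a

# Toy usage: three neurons/features with a random weight matrix
# (purely illustrative, not values from the paper).
rng = np.random.default_rng(0)
W = rng.uniform(-1.0, 1.0, size=(3, 3))   # hypothetical weight matrix
a0 = np.array([0.2, 0.9, 0.5])            # initial feature activations
print(quasi_nonlinear_reasoning(a0, W))   # activations after 5 steps
```

Setting phi=1.0 in this sketch recovers a fully nonlinear recurrent update, while phi=0.0 keeps the initial activations unchanged; intermediate values interpolate between the two regimes.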
Database: OpenAIRE
External link: