Showing 1 - 10 of 73 for the search: '"Grau, Isel"'
This empirical study proposes a novel methodology to measure users' perceived trust in an Explainable Artificial Intelligence (XAI) model. To do so, users' mental models are elicited using Fuzzy Cognitive Maps (FCMs). First, we exploit an interpretable…
External link:
http://arxiv.org/abs/2307.11765
In this paper, we integrate the concepts of feature importance with implicit bias in the context of pattern classification. This is done by means of a three-step methodology that involves (i) building a classifier and tuning its hyperparameters, (ii)…
External link:
http://arxiv.org/abs/2305.09399
In this paper, we tackle the problem of selecting the optimal model for a given structured pattern classification dataset. In this context, a model can be understood as a classifier and a hyperparameter configuration. The proposed meta-learning approach…
External link:
http://arxiv.org/abs/2210.14687
Published in:
In Knowledge-Based Systems, vol. 299, 5 September 2024
This paper proposes an algorithm called Forward Composition Propagation (FCP) to explain the predictions of feed-forward neural networks operating on structured classification problems. In the proposed FCP algorithm, each neuron is described by a com…
External link:
http://arxiv.org/abs/2112.12717
Authors:
Nápoles, Gonzalo, Grau, Isel, Concepción, Leonardo, Koumeri, Lisa Koutsoviti, Papa, João Paulo
This paper presents a Fuzzy Cognitive Map model to quantify implicit bias in structured datasets where features can be numeric or discrete. In our proposal, problem features are mapped to neural concepts that are initially activated by experts when r…
External link:
http://arxiv.org/abs/2112.12713
Published in:
In Knowledge-Based Systems, vol. 295, 8 July 2024
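The snippet above describes mapping dataset features to neural concepts in a Fuzzy Cognitive Map. As a rough illustration of the underlying reasoning mechanism (generic FCM inference with an assumed sigmoid transfer function and a hand-picked toy weight matrix, not the paper's exact formulation), one iterative update can be sketched as:

```python
import numpy as np

def sigmoid(x, slope=1.0):
    # Squashes aggregated causal influence into the [0, 1] activation range.
    return 1.0 / (1.0 + np.exp(-slope * x))

def fcm_step(activations, W, slope=1.0):
    """One synchronous FCM update: A(t+1) = f(W^T @ A(t))."""
    return sigmoid(W.T @ activations, slope)

def fcm_run(activations, W, steps=20, tol=1e-5):
    """Iterate updates until activations reach a fixed point or the step limit."""
    for _ in range(steps):
        nxt = fcm_step(activations, W)
        if np.max(np.abs(nxt - activations)) < tol:
            return nxt
        activations = nxt
    return activations

# Toy example: three concepts, illustrative causal weights only.
W = np.array([[0.0, 0.6, 0.2],
              [0.0, 0.0, 0.7],
              [0.3, 0.0, 0.0]])
A0 = np.array([1.0, 0.0, 0.0])  # expert activates the first concept
print(fcm_run(A0, W))
```

In the generic scheme, `W[i, j]` encodes how strongly concept `i` influences concept `j`; the paper's contribution lies in how such concepts and weights are derived from dataset features to quantify bias, which this sketch does not attempt to reproduce.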
Machine learning solutions for pattern classification problems are nowadays widely deployed in society and industry. However, the lack of transparency and accountability of most accurate models often hinders their safe use. Thus, there is a clear need…
External link:
http://arxiv.org/abs/2107.03423
In this paper, we present a recurrent neural system named Long Short-term Cognitive Networks (LSTCNs) as a generalization of the Short-term Cognitive Network (STCN) model. Such a generalization is motivated by the difficulty of forecasting very long…
External link:
http://arxiv.org/abs/2106.16233
An interpretable semi-supervised classifier using two different strategies for amended self-labeling
In the context of some machine learning applications, obtaining data instances is a relatively easy process but labeling them could become quite expensive or tedious. Such scenarios lead to datasets with few labeled instances and a larger number of unlabeled…
External link:
http://arxiv.org/abs/2001.09502