Showing 1 - 10 of 81 for search: '"Barbiero, Pietro"'
Clustering algorithms rely on complex optimisation processes that may be difficult to comprehend, especially for individuals who lack technical expertise. While many explainable artificial intelligence techniques exist for supervised machine learning…
External link:
http://arxiv.org/abs/2409.12632
Author:
Debot, David, Barbiero, Pietro, Giannini, Francesco, Ciravegna, Gabriele, Diligenti, Michelangelo, Marra, Giuseppe
The lack of transparency in the decision-making processes of deep learning systems presents a significant challenge in modern artificial intelligence (AI), as it impairs users' ability to rely on and verify these systems. To address this challenge, C…
External link:
http://arxiv.org/abs/2407.15527
Author:
De Santis, Francesco, Bich, Philippe, Ciravegna, Gabriele, Barbiero, Pietro, Giordano, Danilo, Cerquitelli, Tania
Despite their success, Large-Language Models (LLMs) still face criticism as their lack of interpretability limits their controllability and reliability. Traditional post-hoc interpretation methods, based on attention and gradient-based analysis, offe…
External link:
http://arxiv.org/abs/2406.14335
Author:
Dominici, Gabriele, Barbiero, Pietro, Giannini, Francesco, Gjoreski, Martin, Langheinrich, Marc
Interpretable deep learning aims at developing neural architectures whose decision-making processes can be understood by their users. Among these techniques, Concept Bottleneck Models enhance the interpretability of neural networks by integrating a…
External link:
http://arxiv.org/abs/2405.16508
Author:
Dominici, Gabriele, Barbiero, Pietro, Zarlenga, Mateo Espinosa, Termine, Alberto, Gjoreski, Martin, Marra, Giuseppe, Langheinrich, Marc
Causal opacity denotes the difficulty in understanding the "hidden" causal structure underlying a deep neural network's (DNN) reasoning. This leads to the inability to rely on and verify state-of-the-art DNN-based systems, especially in high-stakes sc…
External link:
http://arxiv.org/abs/2405.16507
Author:
Fenoglio, Dario, Dominici, Gabriele, Barbiero, Pietro, Tonda, Alberto, Gjoreski, Martin, Langheinrich, Marc
Federated Learning (FL), a privacy-aware approach in distributed deep learning environments, enables many clients to collaboratively train a model without sharing sensitive data, thereby reducing privacy risks. However, enabling human trust and contr…
External link:
http://arxiv.org/abs/2405.15632
Author:
Dominici, Gabriele, Barbiero, Pietro, Giannini, Francesco, Gjoreski, Martin, Marra, Giuseppe, Langheinrich, Marc
Current deep learning models are not designed to simultaneously address three fundamental questions: predict class labels to solve a given classification task (the "What?"), explain task predictions (the "Why?"), and imagine alternative scenarios tha…
External link:
http://arxiv.org/abs/2402.01408
To address the challenge of the "black-box" nature of deep learning in medical settings, we combine GCExplainer, an automated concept discovery solution, with Logic Explained Networks to provide global explanations for Graph Neural Networks.
External link:
http://arxiv.org/abs/2312.02225
Graph neural networks (GNNs) have led to major breakthroughs in a variety of domains such as drug discovery, social network analysis, and travel time estimation. However, they lack interpretability, which hinders human trust and thereby deployment to…
External link:
http://arxiv.org/abs/2311.15112
Author:
Crisostomi, Donato, Cannistraci, Irene, Moschella, Luca, Barbiero, Pietro, Ciccone, Marco, Liò, Pietro, Rodolà, Emanuele
Models trained on semantically related datasets and tasks exhibit comparable inter-sample relations within their latent spaces. In this study, we investigate the aggregation of such latent spaces to create a unified space encompassing the combined inf…
External link:
http://arxiv.org/abs/2311.06547