Showing 1 - 7 of 7 results for the search: '"Hesse, Robin"'
Attribution maps are one of the most established tools to explain the functioning of computer vision models. They assign importance scores to input features, indicating how relevant each feature is for the prediction of a deep neural network. […]
External link:
http://arxiv.org/abs/2407.11910
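To make the idea of attribution maps from the result above concrete, here is a minimal sketch in PyTorch that computes a plain input-gradient saliency map for a toy classifier. The model, input, and method are generic stand-ins chosen for illustration and are not taken from the linked paper.

    import torch
    import torch.nn as nn

    # Toy classifier standing in for a computer vision model.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    ).eval()

    image = torch.rand(1, 3, 64, 64, requires_grad=True)  # dummy input image
    logits = model(image)
    score = logits[0, logits.argmax()]  # logit of the predicted class
    score.backward()                    # gradients w.r.t. the input pixels

    # Importance score per pixel: magnitude of the input gradient,
    # reduced over color channels -> a (1, 64, 64) attribution map.
    attribution = image.grad.abs().max(dim=1).values

Gradient saliency is only one of many attribution methods; it is used here because it requires nothing beyond a differentiable model.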
The field of explainable artificial intelligence (XAI) aims to uncover the inner workings of complex deep neural models. While being crucial for safety-critical domains, XAI inherently lacks ground-truth explanations, making its automatic evaluation […]
External link:
http://arxiv.org/abs/2308.06248
Many convolutional neural networks (CNNs) rely on progressive downsampling of their feature maps to increase the network's receptive field and decrease computational cost. However, this comes at the price of losing granularity in the feature maps, […]
External link:
http://arxiv.org/abs/2305.09504
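The result above refers to progressive downsampling; the following sketch (toy layer sizes, assumed purely for illustration) shows how strided convolutions repeatedly halve the spatial resolution of the feature maps, which enlarges the receptive field and cuts computation but discards fine-grained detail.

    import torch
    import torch.nn as nn

    x = torch.rand(1, 3, 224, 224)  # dummy image
    stages = nn.ModuleList([
        nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 224 -> 112
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 112 -> 56
        nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 56  -> 28
    ])
    for stage in stages:
        x = stage(x)
        print(x.shape)  # spatial size halves at every stage; detail is lost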
Mitigating the dependence on spurious correlations present in the training dataset is a quickly emerging and important topic of deep learning. Recent approaches include priors on the feature attribution of a deep neural network (DNN) into the training […]
External link:
http://arxiv.org/abs/2111.07668
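As a rough, hedged illustration of placing a prior on feature attributions during training, the sketch below adds a penalty on input-gradient attributions that fall inside a region marked as spurious. The mask, loss weight, and helper function are hypothetical and are not the method of the linked paper.

    import torch
    import torch.nn.functional as F

    def loss_with_attribution_prior(model, x, y, spurious_mask, lam=1.0):
        # x: input batch, y: integer labels, spurious_mask: 1 where features
        # are known to be spurious (hypothetical annotation), 0 elsewhere.
        x = x.detach().clone().requires_grad_(True)
        logits = model(x)
        task_loss = F.cross_entropy(logits, y)

        # Attribution = gradient of the true-class logits w.r.t. the input;
        # create_graph=True lets the penalty itself be backpropagated.
        attribution = torch.autograd.grad(
            logits.gather(1, y[:, None]).sum(), x, create_graph=True
        )[0]

        # Penalize attribution mass that lands on the spurious region.
        prior_loss = (attribution.abs() * spurious_mask).mean()
        return task_loss + lam * prior_loss

The combined loss can then be minimized with any standard optimizer; the penalty nudges the network to base its prediction on features outside the masked region.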
Academic article
Author:
Müller-Trefzer, Franziska (Franziska.mueller-trefzer@kit.edu); Heinzel, Annette (annette.heinzel@kit.edu); Hesse, Robin; Weisenburger, Alfons; Wetzel, Thomas; Niedermeier, Klarissa
Published in:
Energy Technology, Feb. 2024, Vol. 12, Issue 2, pp. 1-18 (18 pp.)
Published in:
Automated Technology for Verification & Analysis (ISBN 9783319465197), 2016, pp. 375-391 (17 pp.)