Showing 1 - 7 of 7 results for the search: "Rabold, Johannes"
Explanations for Convolutional Neural Networks (CNNs) based on relevance of input pixels might be too unspecific to evaluate which and how input features impact model decisions. Especially in complex real-world domains like biology, the presence of s…
External link:
http://arxiv.org/abs/2405.01661
In recent research, human-understandable explanations of machine learning models have received a lot of attention. Often, explanations are given in the form of model simplifications or visualizations. However, as shown in cognitive science as well as in e…
External link:
http://arxiv.org/abs/2106.08064
Explainable AI has emerged as a key component for black-box machine learning approaches in domains with a high demand for reliability or transparency. Examples are medical assistant systems and applications concerned with the General Data Protect…
External link:
http://arxiv.org/abs/2105.07371
End-to-end learning with deep neural networks, such as convolutional neural networks (CNNs), has been demonstrated to be very successful for different image classification tasks. To make the decisions of black-box approaches transparent, different sol…
External link:
http://arxiv.org/abs/1910.07856
With the increasing number of deep learning applications, there is a growing demand for explanations. Visual explanations provide information about which parts of an image are relevant for a classifier's decision. However, highlighting of image parts…
External link:
http://arxiv.org/abs/1910.01837
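
The abstracts above repeatedly refer to explanations based on the relevance of input pixels, i.e. visual explanations that highlight image parts. As a purely illustrative aside, not taken from any of the listed papers, a minimal gradient-based pixel-relevance map for a CNN classifier can be sketched as follows; the ResNet-18 model, its random weights, and the dummy input are placeholder assumptions:

import torch
import torchvision.models as models

# Placeholder classifier (untrained stand-in); any CNN classifier would do.
model = models.resnet18(weights=None)
model.eval()

# Dummy input image; in practice this would be a preprocessed photo.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass and choice of the class whose score we want to explain.
logits = model(image)
target_class = logits.argmax(dim=1).item()

# Backpropagate the class score to the input pixels.
logits[0, target_class].backward()

# Per-pixel relevance: gradient magnitude, maximized over colour channels.
saliency = image.grad.abs().max(dim=1)[0]  # shape: (1, 224, 224)
print(saliency.shape)

Such a map only indicates which pixels were influential, not how they interacted, which is the limitation the listed abstracts take as their starting point.
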
Author:
Rabold, Johannes
Published in:
KI: Künstliche Intelligenz; Dec 2022, Vol. 36, Issue 3/4, pp. 225-235 (11 p.)
Published in:
Machine Learning; May 2022, Vol. 111, Issue 5, pp. 1799-1820 (22 p.)