Showing 1 - 10 of 47 results for search: '"Höhne, Marina M. C."'
Heatmaps generated on inputs of image classification networks via explainable AI methods like Grad-CAM and LRP have been observed to resemble segmentations of input images in many cases. Consequently, heatmaps have also been leveraged for achieving …
External link:
http://arxiv.org/abs/2407.03009
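This first result centers on Grad-CAM heatmaps. As background, a minimal sketch of the Grad-CAM computation follows; the ResNet-18 model, the choice of layer4, and the random stand-in input are illustrative assumptions, not this paper's setup.

```python
# Minimal Grad-CAM sketch (PyTorch). Model, layer, and input are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
acts, grads = {}, {}

# Hook the last convolutional block to capture feature maps and their gradients.
layer = model.layer4
layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)     # stand-in for a real image
model(x)[0].max().backward()        # backprop the top class score

# Channel weights = global-average-pooled gradients; the heatmap is the
# ReLU of the weighted sum of feature maps, upsampled to input size.
alpha = grads["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((alpha * acts["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```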
Author:
Kopf, Laura, Bommer, Philine Lou, Hedström, Anna, Lapuschkin, Sebastian, Höhne, Marina M.-C., Bykov, Kirill
A crucial aspect of understanding the complex nature of Deep Neural Networks (DNNs) is the ability to explain learned concepts within their latent representations. While various methods exist to connect neurons to textual descriptions of human-understandable …
External link:
http://arxiv.org/abs/2405.20331
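This paper concerns textual descriptions of neurons. A common substrate for such methods is collecting a neuron's most strongly activating inputs; the hedged sketch below illustrates that step only, with the model, layer, channel index, and random stand-in data all being assumptions.

```python
# Hedged sketch of a common first step behind neuron-description methods:
# find the dataset inputs that most strongly activate a chosen neuron.
import torch
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))

images = torch.randn(64, 3, 224, 224)   # stand-in for a real dataset
with torch.no_grad():
    model(images)

neuron = 42                              # hypothetical channel in layer4
scores = feats["a"][:, neuron].mean(dim=(1, 2))  # one pooled score per image
top9 = scores.topk(9).indices            # the nine most activating images
# A description method would now summarize what images[top9] have in common.
```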
Author:
Bareeva, Dilyara, Höhne, Marina M.-C., Warnecke, Alexander, Pirch, Lukas, Müller, Klaus-Robert, Rieck, Konrad, Bykov, Kirill
Deep Neural Networks (DNNs) are capable of learning complex and versatile representations; however, the semantic nature of the learned concepts remains unknown. A common method used to explain the concepts learned by DNNs is Feature Visualization (FV) …
External link:
http://arxiv.org/abs/2401.06122
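For readers unfamiliar with Feature Visualization (FV), the sketch below shows its core idea, activation maximization by gradient ascent on the input; the model, the chosen unit, and the bare-bones optimization without regularizers are simplifying assumptions.

```python
# Bare-bones activation maximization, the core of Feature Visualization:
# gradient-ascend a random input to maximize one unit's mean activation.
# Real FV adds jitter and frequency regularizers for cleaner images.
import torch
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)              # only the input is optimized

acts = {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(a=o))

x = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(256):
    opt.zero_grad()
    model(x)
    loss = -acts["a"][:, 42].mean()      # maximize channel 42 (hypothetical)
    loss.backward()
    opt.step()
# x now approximates the pattern unit 42 responds to.
```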
Author:
Liu, Shanghua, Hedström, Anna, Basavegowda, Deepak Hanike, Weltzien, Cornelia, Höhne, Marina M.-C.
Grasslands are known for their high biodiversity and ability to provide multiple ecosystem services. Challenges in automating the identification of indicator plants are key obstacles to large-scale grassland monitoring. These challenges stem from the …
External link:
http://arxiv.org/abs/2312.08408
Explainable AI (XAI) has unfolded in two distinct research directions with, on the one hand, post-hoc methods that explain the predictions of a pre-trained black-box model and, on the other hand, self-explainable models (SEMs) which are trained directly …
External link:
http://arxiv.org/abs/2312.07822
Published in:
37th Conference on Neural Information Processing Systems (NeurIPS 2023)
Deep Neural Networks (DNNs) demonstrate remarkable capabilities in learning complex hierarchical data representations, but the nature of these representations remains largely unknown. Existing global explainability methods, such as Network Dissection, …
External link:
http://arxiv.org/abs/2311.13594
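Network Dissection, named in this abstract, labels units by comparing thresholded activation maps against annotated concept masks. The following is a simplified single-image sketch of that IoU score; the arrays are stand-ins (real pipelines use annotated data such as Broden and threshold each unit over the whole dataset).

```python
# Simplified Network Dissection-style score: IoU between a unit's
# top-quantile activation region and a binary concept mask.
import numpy as np

def dissection_iou(act_map, concept_mask, quantile=0.995):
    """IoU between the top-quantile activation region and a concept mask."""
    unit_mask = act_map >= np.quantile(act_map, quantile)
    inter = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return float(inter / union) if union else 0.0

act = np.random.rand(112, 112)             # hypothetical upsampled activation map
mask = np.zeros((112, 112), dtype=bool)    # hypothetical concept segmentation
mask[30:60, 30:60] = True
print(dissection_iou(act, mask))
```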
Autonomous flying robots, such as multirotors, often rely on deep learning models that make predictions based on a camera image, e.g. for pose estimation. These models can predict surprising results if applied to input images outside the training domain …
External link:
http://arxiv.org/abs/2308.00344
Autonomous flying robots, e.g. multirotors, often rely on a neural network that makes predictions based on a camera image. These deep learning (DL) models can compute surprising results if applied to input images outside the training domain. Adversarial …
External link:
http://arxiv.org/abs/2305.12859
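The truncated sentence above refers to adversarial inputs. A minimal sketch of the fast gradient sign method (FGSM, not necessarily this paper's attack) shows how such an input can be constructed; the model, label, and epsilon are illustrative assumptions.

```python
# Minimal FGSM sketch: perturb an image along the sign of the loss gradient.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in camera image
label = torch.tensor([0])                           # hypothetical true label

loss = F.cross_entropy(model(x), label)
loss.backward()

eps = 0.03                                          # perturbation budget
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
# x_adv is visually close to x yet can flip the network's prediction.
```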
The utilization of pre-trained networks, especially those trained on ImageNet, has become a common practice in Computer Vision. However, prior research has indicated that a significant number of images in the ImageNet dataset contain watermarks, making …
External link:
http://arxiv.org/abs/2303.05498
Explainable artificial intelligence (XAI) methods shed light on the predictions of machine learning algorithms. Several different approaches exist and have already been applied in climate science. However, usually missing ground truth explanations …
External link:
http://arxiv.org/abs/2303.00652
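When ground-truth explanations are missing, faithfulness proxies such as pixel flipping are often used to compare XAI methods: remove the pixels an explanation ranks highest and watch the prediction degrade. The sketch below is a simplified illustration under assumed tensor shapes, not this paper's evaluation protocol.

```python
# Simplified pixel-flipping curve: zero out the most-attributed pixels in
# chunks and record the model's confidence in its original prediction.
# Shapes are assumed: x is (C, H, W), attribution matches x elementwise.
import torch

def pixel_flipping_curve(model, x, attribution, steps=10):
    with torch.no_grad():
        cls = model(x.unsqueeze(0)).argmax().item()   # original top class
    order = attribution.flatten().argsort(descending=True)
    flat = x.clone().flatten()
    chunk = len(order) // steps
    curve = []
    for s in range(steps):
        flat[order[s * chunk:(s + 1) * chunk]] = 0.0  # "flip" the next chunk
        with torch.no_grad():
            probs = model(flat.view_as(x).unsqueeze(0)).softmax(-1)
        curve.append(probs[0, cls].item())
    return curve  # a faster drop suggests a more faithful explanation
```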