Showing 1 - 10 of 743 for search: '"Wiegand, Thomas"'
Author:
Bareeva, Dilyara, Yolcu, Galip Ümit, Hedström, Anna, Schmolenski, Niklas, Wiegand, Thomas, Samek, Wojciech, Lapuschkin, Sebastian
In recent years, training data attribution (TDA) methods have emerged as a promising direction for the interpretability of neural networks. While research around TDA is thriving, limited effort has been dedicated to the evaluation of attributions. …
External link:
http://arxiv.org/abs/2410.07158
Author:
Naujoks, Jonas R., Krasowski, Aleksander, Weckbecker, Moritz, Wiegand, Thomas, Lapuschkin, Sebastian, Samek, Wojciech, Klausen, René P.
Recently, physics-informed neural networks (PINNs) have emerged as a flexible and promising application of deep learning to partial differential equations in the physical sciences. While offering strong performance and competitive inference speeds on …
External link:
http://arxiv.org/abs/2409.08958
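
As a hedged illustration of the PINN idea the abstract alludes to (not the authors' implementation), the sketch below trains a tiny network to satisfy the 1-D ODE u'(x) = -u(x) with u(0) = 1 by penalizing the equation residual at random collocation points; the architecture and hyperparameters are illustrative assumptions.

# Minimal PINN sketch: fit u(x) obeying u'(x) = -u(x), u(0) = 1
# (exact solution: exp(-x)). Illustrative only, not the paper's code.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)        # collocation points in [0, 1]
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]  # du/dx
    residual = du + u                                 # ODE residual u' + u = 0
    bc = net(torch.zeros(1, 1)) - 1.0                 # boundary condition u(0) = 1
    loss = (residual ** 2).mean() + (bc ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(net(torch.tensor([[1.0]]))))  # should approach exp(-1) ≈ 0.368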
Author:
Hatefi, Sayed Mohammad Vakilzadeh, Dreyer, Maximilian, Achtibat, Reduan, Wiegand, Thomas, Samek, Wojciech, Lapuschkin, Sebastian
To solve ever more complex problems, Deep Neural Networks are scaled to billions of parameters, leading to huge computational costs. An effective approach to reduce computational requirements and increase efficiency is to prune unnecessary components …
External link:
http://arxiv.org/abs/2408.12568
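
The abstract is cut off before the pruning criterion is described, so the sketch below shows a generic attribution-guided variant as an assumption on my part, not necessarily the authors' method: hidden units are scored by the mean |activation × gradient| of the loss, and the lowest-scoring ones are masked out.

# Generic attribution-guided pruning sketch (illustrative assumption):
# score hidden units by mean |activation * gradient|, zero out the rest.
import torch

torch.manual_seed(0)
fc1, fc2 = torch.nn.Linear(10, 16), torch.nn.Linear(16, 2)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))

h = torch.relu(fc1(x))
h.retain_grad()                              # keep gradients of the hidden layer
loss = torch.nn.functional.cross_entropy(fc2(h), y)
loss.backward()

score = (h * h.grad).abs().mean(dim=0)       # one relevance proxy per hidden unit
keep = score.argsort(descending=True)[:8]    # retain the 8 most relevant units
mask = torch.zeros(16); mask[keep] = 1.0
with torch.no_grad():                        # prune by masking rows of fc1
    fc1.weight.mul_(mask.unsqueeze(1))
    fc1.bias.mul_(mask)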
Local data attribution (or influence estimation) techniques aim at estimating the impact that individual data points seen during training have on particular predictions of an already trained Machine Learning model during test time. Previous methods …
External link:
http://arxiv.org/abs/2402.12118
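
The definition in the abstract above can be made concrete with the simplest (and most expensive) estimator, leave-one-out retraining: the influence of training point i on a test prediction is the change in that prediction when the model is retrained without point i. The sketch below does this for a small logistic regression; it is a didactic baseline, not any of these papers' methods.

# Leave-one-out influence sketch: how much does dropping one training
# point change the model's confidence on a test point?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
x_test = np.array([[0.5, 0.5]])

def confidence(X_tr, y_tr):
    model = LogisticRegression().fit(X_tr, y_tr)
    return model.predict_proba(x_test)[0, 1]

base = confidence(X, y)
influence = np.array([
    base - confidence(np.delete(X, i, axis=0), np.delete(y, i))
    for i in range(len(X))
])  # positive: removing point i lowers confidence, so it supported the prediction
print(influence.argsort()[-3:])  # indices of the three most supportive points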
Author:
Achtibat, Reduan, Hatefi, Sayed Mohammad Vakilzadeh, Dreyer, Maximilian, Jain, Aakriti, Wiegand, Thomas, Lapuschkin, Sebastian, Samek, Wojciech
Large Language Models are prone to biased predictions and hallucinations, underlining the paramount importance of understanding their model-internal reasoning process. However, achieving faithful attributions for the entirety of a black-box transformer …
External link:
http://arxiv.org/abs/2402.05602
Author:
Weber, Leander, Berend, Jim, Binder, Alexander, Wiegand, Thomas, Samek, Wojciech, Lapuschkin, Sebastian
In this paper, we present Layer-wise Feedback Propagation (LFP), a novel training approach for neural-network-like predictors that utilizes explainability, specifically Layer-wise Relevance Propagation (LRP), to assign rewards to individual connections …
External link:
http://arxiv.org/abs/2308.12053
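
As a rough sketch of the idea described above (a reward distributed to connections via an LRP-style backward pass), the code below splits a scalar output reward among connections in proportion to their contribution and nudges each weight by its share. This is a loose reading of the abstract, not the authors' LFP algorithm; every detail is an assumption.

# Loose reward-propagation sketch (my reading of the abstract, not LFP itself):
# an output reward is shared across connections by contribution (LRP-style
# epsilon rule), and each weight is reinforced by its reward share.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)) * 0.5, rng.normal(size=(8, 2)) * 0.5
x = rng.normal(size=4)
eps, lr = 1e-6, 0.1

a1 = np.maximum(x @ W1, 0.0)               # hidden activations (ReLU)
out = a1 @ W2
reward = np.zeros(2)
reward[out.argmax()] = 1.0 if out.max() > 0 else -1.0

z2 = a1[:, None] * W2                       # contribution of each connection
R2 = z2 / (z2.sum(axis=0) + eps) * reward   # per-connection reward share
R_hidden = R2.sum(axis=1)                   # reward arriving at each hidden unit
z1 = x[:, None] * W1
R1 = z1 / (z1.sum(axis=0) + eps) * R_hidden

W2 += lr * R2                               # reinforce rewarded connections
W1 += lr * R1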
Author:
Dreyer, Maximilian, Achtibat, Reduan, Wiegand, Thomas, Samek, Wojciech, Lapuschkin, Sebastian
Applying traditional post-hoc attribution methods to segmentation or object detection predictors offers only limited insights, as the obtained feature attribution maps at input level typically resemble the models' predicted segmentation mask or bounding box …
External link:
http://arxiv.org/abs/2211.11426
Author:
Achtibat, Reduan, Dreyer, Maximilian, Eisenbraun, Ilona, Bosse, Sebastian, Wiegand, Thomas, Samek, Wojciech, Lapuschkin, Sebastian
Published in:
Nature Machine Intelligence, vol. 5, pp. 1006-1019 (2023)
The field of eXplainable Artificial Intelligence (XAI) aims to bring transparency to today's powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, thereby identifying where important …
External link:
http://arxiv.org/abs/2206.03208
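
For the attribution maps the abstract refers to, a minimal local-XAI baseline is a gradient saliency map: the gradient of the predicted class score with respect to the input marks where the model is sensitive. The sketch below illustrates only this generic baseline, not the paper's own method; the toy model is an assumption.

# Minimal gradient saliency sketch: a basic local attribution map showing
# *where* the input influences the predicted score.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(8, 10),
)
img = torch.randn(1, 3, 32, 32, requires_grad=True)

score = model(img).max()                     # top class score
score.backward()
saliency = img.grad.abs().max(dim=1).values  # (1, 32, 32) attribution map
print(saliency.shape)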
Author:
Pahde, Frederik, Dreyer, Maximilian, Weber, Leander, Weckbecker, Moritz, Anders, Christopher J., Wiegand, Thomas, Samek, Wojciech, Lapuschkin, Sebastian
With a growing interest in understanding neural network prediction strategies, Concept Activation Vectors (CAVs) have emerged as a popular tool for modeling human-understandable concepts in the latent space. Commonly, CAVs are computed by leveraging …
External link:
http://arxiv.org/abs/2202.03482
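
The abstract is cut off, but the standard CAV construction (due to Kim et al.; whether and how this paper modifies it is not visible here) fits a linear classifier separating layer activations of concept examples from those of random examples, and takes the classifier's normal vector as the concept direction. A minimal sketch with synthetic activations:

# Minimal CAV sketch: fit a linear probe separating "concept" activations
# from random activations; its weight vector is the Concept Activation Vector.
# Synthetic data stands in for a real layer's activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
concept_acts = rng.normal(loc=1.0, size=(100, 64))   # activations on concept inputs
random_acts = rng.normal(loc=0.0, size=(100, 64))    # activations on random inputs

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
probe = LogisticRegression(max_iter=1000).fit(X, y)

cav = probe.coef_[0]
cav /= np.linalg.norm(cav)                           # unit-norm concept direction
# Concept sensitivity of a new activation: its projection onto the CAV.
print(float(rng.normal(size=64) @ cav))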