Showing 1 - 10 of 24 for the search: '"Böhle, Moritz"'
Concept Bottleneck Models (CBMs) have recently been proposed to address the 'black-box' problem of deep neural networks by first mapping images to a human-understandable concept space and then linearly combining concepts for classification. Such models …
External link:
http://arxiv.org/abs/2407.14499
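To make the concept-bottleneck idea concrete, here is a minimal PyTorch sketch of the two-stage design described above: an image encoder predicts concept scores, and a single linear layer combines them into class logits. The backbone choice and the concept/class counts are placeholder assumptions, not taken from the paper.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class ConceptBottleneckModel(nn.Module):
    """Minimal concept-bottleneck sketch: image -> concept scores -> linear classifier."""
    def __init__(self, num_concepts=112, num_classes=200):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, num_concepts)
        self.concept_encoder = backbone                          # predicts human-interpretable concept scores
        self.classifier = nn.Linear(num_concepts, num_classes)   # linear combination of concepts

    def forward(self, x):
        concepts = torch.sigmoid(self.concept_encoder(x))   # concept activations in [0, 1]
        logits = self.classifier(concepts)                   # class scores are linear in the concepts
        return logits, concepts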
Knowledge Distillation (KD) has proven effective for compressing large teacher models into smaller student models. While it is well known that student models can achieve accuracies similar to their teachers', it has also been shown that they nonetheless …
External link:
http://arxiv.org/abs/2402.03119
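The abstract refers to the standard teacher-student setup; as a reminder of how the distillation objective is usually formulated, here is the generic soft-label KD loss of Hinton et al., shown only for context and not as this paper's contribution:

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic soft-label knowledge distillation loss (illustrative, not this paper's method)."""
    ce = F.cross_entropy(student_logits, labels)          # supervision from ground-truth labels
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),         # softened student predictions
        F.softmax(teacher_logits / T, dim=1),             # softened teacher predictions
        reduction="batchmean",
    ) * (T * T)                                           # rescale to account for the temperature
    return alpha * ce + (1 - alpha) * kd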
We present a new direction for increasing the interpretability of deep neural networks (DNNs) by promoting weight-input alignment during training. For this, we propose to replace the linear transformations in DNNs with our novel B-cos transformation. …
External link:
http://arxiv.org/abs/2306.10898
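For illustration, a rough sketch of a B-cos unit, reading the transform as out = |cos(x, w)|^(B-1) · (ŵᵀx) with unit-norm weights ŵ; scaling details, MaxOut, and the convolutional variants are omitted, so treat this as a simplified reading of the method rather than the reference implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class BcosLinear(nn.Module):
    """Simplified B-cos unit: out = |cos(x, w)|^(B-1) * (w_hat^T x)."""
    def __init__(self, in_features, out_features, B=2.0):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.B = B

    def forward(self, x):
        w_hat = F.normalize(self.weight, dim=1)               # unit-norm weight vectors
        linear = F.linear(x, w_hat)                           # w_hat^T x = ||x|| * cos(x, w_hat)
        cos = linear / (x.norm(dim=-1, keepdim=True) + 1e-6)
        return linear * cos.abs().pow(self.B - 1)             # suppress outputs of poorly aligned weights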
Most approaches for self-supervised learning (SSL) are optimised on curated, balanced datasets such as ImageNet, despite the fact that natural data usually exhibits long-tail distributions. In this paper, we analyse the behaviour of one of the most popular …
External link:
http://arxiv.org/abs/2303.13664
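As a concrete picture of the 'long-tail versus curated balanced data' setting the abstract contrasts, here is a small helper that subsamples a balanced dataset into an exponentially imbalanced one; the exact imbalance profile used in the paper is an assumption here:

import numpy as np

def long_tail_indices(labels, num_classes, imbalance_factor=100, seed=0):
    """Subsample a balanced dataset into an exponentially long-tailed one (illustrative setup)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    max_per_class = np.bincount(labels, minlength=num_classes).min()
    keep = []
    for c in range(num_classes):
        # class c keeps exponentially fewer samples the larger its index
        n_c = int(max_per_class * imbalance_factor ** (-c / (num_classes - 1)))
        idx = np.where(labels == c)[0]
        keep.append(rng.choice(idx, size=max(n_c, 1), replace=False))
    return np.concatenate(keep)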
Published in:
2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2023, pp. 1922-1933
Despite being highly performant, deep neural networks might base their decisions on features that spuriously correlate with the provided labels, thus hurting generalization. To mitigate this, 'model guidance' has recently gained popularity, i.e. the …
External link:
http://arxiv.org/abs/2303.11932
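'Model guidance' typically means adding a localisation term to the training loss so that the model's explanations stay on the object. One simple instantiation, penalising input-gradient attribution that falls outside a ground-truth mask, is sketched here; the paper compares several such losses and attribution methods, so this is only an illustrative variant:

import torch
import torch.nn.functional as F

def guidance_energy_loss(model, images, labels, masks):
    """Penalise the fraction of positive attribution energy outside the object mask.
    masks: binary tensors of shape (B, 1, H, W); one illustrative guidance loss, not the paper's exact variant."""
    images = images.clone().requires_grad_(True)
    logits = model(images)
    cls_score = logits.gather(1, labels[:, None]).sum()
    grads, = torch.autograd.grad(cls_score, images, create_graph=True)   # differentiable attribution
    attribution = (grads * images).sum(1, keepdim=True).clamp(min=0)     # positive input-gradient contributions
    outside = (attribution * (1 - masks)).flatten(1).sum(1)
    total = attribution.flatten(1).sum(1) + 1e-6
    return (outside / total).mean()

# Typical use: loss = F.cross_entropy(model(images), labels) + lambda_guid * guidance_energy_loss(model, images, labels, masks)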
Published in:
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 6, pp. 4090-4101, June 2024
Deep neural networks are very successful on many vision tasks, but are hard to interpret due to their black-box nature. To overcome this, various post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions …
External link:
http://arxiv.org/abs/2303.11884
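One of the simplest post-hoc attribution methods of the kind referred to here is Gradient x Input; a sketch for a single image, purely as an example of the family of methods the abstract mentions:

import torch

def gradient_x_input(model, image, target_class):
    """Gradient x Input attribution for one image of shape (3, H, W); returns an H x W heatmap."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return (image.grad * image).sum(0).detach()   # aggregate contributions over colour channels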
Transformers increasingly dominate the machine learning landscape across many tasks and domains, which increases the importance of understanding their outputs. While their attention modules provide partial insight into their inner workings, the attention …
External link:
http://arxiv.org/abs/2301.08669
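A common attention-based explanation baseline, attention rollout (Abnar & Zuidema, 2020), aggregates per-layer attention maps into a single input-to-output map; it is shown here only to illustrate what 'partial insight from attention' looks like in code and is not the method proposed in the paper:

import torch

def attention_rollout(attn_maps):
    """attn_maps: list of per-layer attention tensors of shape (num_heads, num_tokens, num_tokens)."""
    num_tokens = attn_maps[0].shape[-1]
    rollout = torch.eye(num_tokens, device=attn_maps[0].device)
    for attn in attn_maps:
        a = attn.mean(0)                                    # average over heads
        a = a + torch.eye(num_tokens, device=a.device)      # account for the residual connection
        a = a / a.sum(-1, keepdim=True)                     # re-normalise rows
        rollout = a @ rollout                               # compose with earlier layers
    return rollout                                          # row 0: input relevance for the CLS token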
Published in:
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 2022, pp. 10213-10222
Deep neural networks are very successful on many vision tasks, but are hard to interpret due to their black-box nature. To overcome this, various post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions …
External link:
http://arxiv.org/abs/2205.10435
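For contrast with gradient-based approaches, occlusion-style attribution (mask one patch at a time and record the drop in the target score) is another of the post-hoc methods such comparisons typically cover; again an illustrative sketch, not this paper's procedure:

import torch

def occlusion_attribution(model, image, target_class, patch=16, baseline=0.0):
    """Occlusion attribution for one image of shape (C, H, W); H and W are assumed divisible by `patch`."""
    model.eval()
    _, h, w = image.shape
    with torch.no_grad():
        base_score = model(image.unsqueeze(0))[0, target_class].item()
        heatmap = torch.zeros(h // patch, w // patch)
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                occluded = image.clone()
                occluded[:, i:i + patch, j:j + patch] = baseline                # blank out one patch
                score = model(occluded.unsqueeze(0))[0, target_class].item()
                heatmap[i // patch, j // patch] = base_score - score            # score drop = patch importance
    return heatmap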
We present a new direction for increasing the interpretability of deep neural networks (DNNs) by promoting weight-input alignment during training. For this, we propose to replace the linear transforms in DNNs with our B-cos transform. As we show, a sequence …
External link:
http://arxiv.org/abs/2205.10268
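The practical appeal of such dynamic-linear layers is that, for a fixed input, the whole network can be read as a single linear map W(x), so contributions to an output unit can be read off as W(x)[k] * x. A sketch of that read-out, under the assumption that the model is run in an explanation mode in which the dynamic scaling factors are detached (so the output really is linear in x); this is an interpretation of the abstract, not the authors' code:

import torch

def dynamic_linear_contributions(model, x, target):
    """Per-feature contributions W(x)[target] * x for a model that acts as a single
    input-dependent linear map; assumes the dynamic factors are detached in the forward pass."""
    x = x.clone().requires_grad_(True)
    out = model(x.unsqueeze(0))[0, target]
    out.backward()
    w_row = x.grad               # the target unit's row of the effective matrix W(x)
    return w_row * x.detach()    # contribution of each input dimension to the target score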
Published in:
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 6, pp. 7625-7638, June 2023
We introduce a new family of neural network models called Convolutional Dynamic Alignment Networks (CoDA Nets), which are performant classifiers with a high degree of inherent interpretability. Their core building blocks are Dynamic Alignment Units (DAUs) …
External link:
http://arxiv.org/abs/2109.13004
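A rough sketch of the core idea of a Dynamic Alignment Unit, as far as it can be read from the abstract: the unit computes an input-dependent, norm-bounded weight vector and applies it linearly to the input, out = w(x)ᵀx. The parameterisation below (a plain linear weight generator with unit-norm rescaling) is a simplification and omits the paper's exact construction:

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicAlignmentUnit(nn.Module):
    """Simplified DAU: dynamically computed, norm-bounded weights applied linearly to the input."""
    def __init__(self, in_features, max_norm=1.0):
        super().__init__()
        self.weight_generator = nn.Linear(in_features, in_features, bias=False)
        self.max_norm = max_norm

    def forward(self, x):
        w = self.weight_generator(x)                  # input-dependent weight vector
        w = self.max_norm * F.normalize(w, dim=-1)    # bound its norm so alignment drives the output
        return (w * x).sum(-1, keepdim=True)          # dynamic linear map: w(x)^T x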