Showing 1 - 10 of 31 for search: '"Stammer, Wolfgang"'
The challenge in object-based visual reasoning lies in generating concept representations that are both descriptive and distinct. Achieving this in an unsupervised manner requires human users to understand the model's learned concepts and, if necessa…
External link:
http://arxiv.org/abs/2406.09949
Author:
Wüst, Antonia, Stammer, Wolfgang, Delfosse, Quentin, Dhami, Devendra Singh, Kersting, Kristian
The challenge in learning abstract concepts from images in an unsupervised fashion lies in the required integration of visual perception and generalizable relational reasoning. Moreover, the unsupervised nature of this task makes it necessary for hum…
External link:
http://arxiv.org/abs/2402.08280
Author:
Busch, Florian Peter, Kamath, Roshni, Mitchell, Rupert, Stammer, Wolfgang, Kersting, Kristian, Mundt, Martin
A dataset is confounded if it is most easily solved via a spurious correlation, which fails to generalize to new data. In this work, we show that, in a continual learning setting where confounders may vary in time across tasks, the challenge of mitig…
External link:
http://arxiv.org/abs/2402.06434
Author:
Delfosse, Quentin, Sztwiertnia, Sebastian, Rothermel, Mark, Stammer, Wolfgang, Kersting, Kristian
Goal misalignment, reward sparsity and difficult credit assignment are only a few of the many issues that make it difficult for deep reinforcement learning (RL) agents to learn optimal policies. Unfortunately, the black-box nature of deep neural netw…
External link:
http://arxiv.org/abs/2401.05821
Author:
Stammer, Wolfgang, Friedrich, Felix, Steinmann, David, Brack, Manuel, Shindo, Hikaru, Kersting, Kristian
Published in:
Transactions on Machine Learning Research 2024
Much of explainable AI research treats explanations as a means for model inspection. Yet, this neglects findings from human psychology that describe the benefit of self-explanations in an agent's learning process. Motivated by this, we introduce a no…
External link:
http://arxiv.org/abs/2309.08395
While deep learning models often lack interpretability, concept bottleneck models (CBMs) provide inherent explanations via their concept representations. Moreover, they allow users to perform interventional interactions on these concepts by updating…
External link:
http://arxiv.org/abs/2308.13453
Despite the successes of recent developments in visual AI, different shortcomings still exist; from missing exact logical reasoning, to abstract generalization abilities, to understanding complex and noisy scenes. Unfortunately, existing benchmarks,…
External link:
http://arxiv.org/abs/2306.07743
Author:
Delfosse, Quentin, Stammer, Wolfgang, Rothenbacher, Thomas, Vittal, Dwarak, Kersting, Kristian
Published in:
Machine Learning and Knowledge Discovery in Databases: Research Track. ECML PKDD 2023. Lecture Notes in Computer Science(), vol 14172. Springer, Cham
Recent unsupervised multi-object detection models have shown impressive performance improvements, largely attributed to novel architectural inductive biases. Unfortunately, they may produce suboptimal object encodings for downstream tasks. To overcom…
External link:
http://arxiv.org/abs/2211.09771
Current transformer language models (LM) are large-scale models with billions of parameters. They have been shown to provide high performances on a variety of tasks but are also prone to shortcut learning and bias. Addressing such incorrect model beh…
External link:
http://arxiv.org/abs/2210.10332
As machine learning models become increasingly larger, trained weakly supervised on large, possibly uncurated data sets, it becomes increasingly important to establish mechanisms for inspecting, interacting, and revising models to mitigate learning s…
External link:
http://arxiv.org/abs/2203.03668