Showing 1 - 10 of 158 for search: '"Montavon, Grégoire"'
Author:
Esders, Malte, Schnake, Thomas, Lederer, Jonas, Kabylda, Adil, Montavon, Grégoire, Tkatchenko, Alexandre, Müller, Klaus-Robert
While machine learning (ML) models have been able to achieve unprecedented accuracies across various prediction tasks in quantum chemistry, it is now apparent that accuracy on a test set alone is not a guarantee for robust chemical modeling…
External link:
http://arxiv.org/abs/2410.13833
Author:
Schnake, Thomas, Jafari, Farnoush Rezaei, Lederer, Jonas, Xiong, Ping, Nakajima, Shinichi, Gugler, Stefan, Montavon, Grégoire, Müller, Klaus-Robert
Explainable Artificial Intelligence (XAI) plays a crucial role in fostering transparency and trust in AI systems, where traditional XAI approaches typically offer one level of abstraction for explanations, often in the form of heatmaps highlighting…
External link:
http://arxiv.org/abs/2408.17198
Author:
Kauffmann, Jacob, Dippel, Jonas, Ruff, Lukas, Samek, Wojciech, Müller, Klaus-Robert, Montavon, Grégoire
Unsupervised learning has become an essential building block of AI systems. The representations it produces, e.g. in foundation models, are critical to a wide variety of downstream applications. It is therefore important to carefully examine unsupervised…
External link:
http://arxiv.org/abs/2408.08041
Recent sequence modeling approaches using Selective State Space Sequence Models, referred to as Mamba models, have seen a surge of interest. These models allow efficient processing of long sequences in linear time and are rapidly being adopted in a wide…
External link:
http://arxiv.org/abs/2406.07592
In recent years, Explainable AI (XAI) methods have facilitated profound validation and knowledge extraction from ML models. While extensively studied for classification, few XAI solutions have addressed the challenges specific to regression models…
External link:
http://arxiv.org/abs/2403.07486
Explainable AI has brought transparency into complex ML black boxes, enabling, in particular, the identification of which features these models use for their predictions. So far, the question of explaining predictive uncertainty, i.e. why a model 'doubts', has…
External link:
http://arxiv.org/abs/2401.17441
Author:
Eberle, Oliver, Büttner, Jochen, El-Hajj, Hassan, Montavon, Grégoire, Müller, Klaus-Robert, Valleriani, Matteo
Historical materials are abundant. Yet, piecing together how human knowledge has evolved and spread both diachronically and synchronically remains a challenge that can so far only be very selectively addressed. The vast volume of materials precludes…
External link:
http://arxiv.org/abs/2310.09091
Author:
Bender, Sidney, Anders, Christopher J., Chormai, Pattarawatt, Marxfeld, Heike, Herrmann, Jan, Montavon, Grégoire
This paper introduces a novel technique called counterfactual knowledge distillation (CFKD) to detect and remove reliance on confounders in deep learning models with the help of human expert feedback. Confounders are spurious features that models tend…
External link:
http://arxiv.org/abs/2310.01011
Robustness has become an important consideration in deep learning. With the help of explainable AI, mismatches between an explained model's decision strategy and the user's domain knowledge (e.g. Clever Hans effects) have been identified as a starting…
External link:
http://arxiv.org/abs/2304.05727