Showing 1 - 10 of 1,388 results for search: '"Holzinger, Andreas"'
Author:
Baniecki, Hubert, Chrabaszcz, Maciej, Holzinger, Andreas, Pfeifer, Bastian, Saranti, Anna, Biecek, Przemyslaw
Evaluating explanations of image classifiers regarding ground truth, e.g. segmentation masks defined by human perception, primarily evaluates the quality of the models under consideration rather than the explanation methods themselves. Driven by this…
External link:
http://arxiv.org/abs/2311.04813
Author:
Longo, Luca, Brcic, Mario, Cabitza, Federico, Choi, Jaesik, Confalonieri, Roberto, Del Ser, Javier, Guidotti, Riccardo, Hayashi, Yoichi, Herrera, Francisco, Holzinger, Andreas, Jiang, Richard, Khosravi, Hassan, Lecue, Freddy, Malgieri, Gianclaudio, Páez, Andrés, Samek, Wojciech, Schneider, Johannes, Speith, Timo, Stumpf, Simone
Published in:
Information Fusion 2024
As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with…
External link:
http://arxiv.org/abs/2310.19775
Author:
Pfeifer, Bastian, Krzyzinski, Mateusz, Baniecki, Hubert, Saranti, Anna, Holzinger, Andreas, Biecek, Przemyslaw
Explainable AI (XAI) is an increasingly important area of machine learning research, which aims to make black-box models transparent and interpretable. In this paper, we propose a novel approach to XAI that uses the so-called counterfactual paths…
External link:
http://arxiv.org/abs/2307.07764
Large language models, e.g. ChatGPT, are currently contributing enormously to making artificial intelligence even more popular, especially among the general population. However, such chatbot models were developed as tools to support natural language…
External link:
http://arxiv.org/abs/2305.10646