Showing 1 - 10 of 578 for the search: '"P. Biecek"'
Author:
Sobieski, Bartlomiej, Grzywaczewski, Jakub, Sadlej, Bartlomiej, Tivnan, Matthew, Biecek, Przemyslaw
Visual counterfactual explanations (VCEs) have recently gained immense popularity as a tool for clarifying the decision-making process of image classifiers. This trend is largely motivated by what these explanations promise to deliver -- indicate sem…
External link:
http://arxiv.org/abs/2410.12591
Analysis of 3D segmentation models, especially in the context of medical imaging, is often limited to segmentation performance metrics that overlook the crucial aspects of explainability and bias. Currently, effectively explaining these models with sa…
External link:
http://arxiv.org/abs/2407.16653
Recent advances in Vision Transformers (ViTs) have significantly enhanced medical image segmentation by facilitating the learning of global relationships. However, these methods face a notable challenge in capturing diverse local and global long-range…
External link:
http://arxiv.org/abs/2407.07514
Exact computation of various machine learning explanations requires numerous model evaluations and in extreme cases becomes impractical. The computational cost of approximation increases with the ever-growing size of data and model parameters. Many…
External link:
http://arxiv.org/abs/2406.18334
Published in:
Machine Learning and Knowledge Discovery in Databases, vol. 2, pp. 125-142, 2024
We study the robustness of global post-hoc explanations for predictive models trained on tabular data. Effects of predictor features in black-box supervised learning are an essential diagnostic tool for model debugging and scientific discovery in app…
External link:
http://arxiv.org/abs/2406.09069
The development of Artificial Intelligence for healthcare is of great importance. Models can sometimes achieve performance superior even to that of human experts; however, they can reason based on spurious features. This is not acceptable to the experts, as i…
External link:
http://arxiv.org/abs/2405.14301
Does the stethoscope in the picture make the adjacent person a doctor or a patient? This, of course, depends on the contextual relationship of the two objects. If it is obvious, why don't explanation methods for vision models use contextual information…
External link:
http://arxiv.org/abs/2404.18316
If AI is the new electricity, what should we do to keep ourselves from getting electrocuted? In this work, we explore factors related to the potential of large language models (LLMs) to manipulate human decisions. We describe the results of two exper…
External link:
http://arxiv.org/abs/2404.14230
Despite steady progress in the development of methods for generating visual counterfactual explanations, especially with the recent rise of Denoising Diffusion Probabilistic Models, previous works have treated them as an entirely local technique. In this…
External link:
http://arxiv.org/abs/2404.12488
Explainable Artificial Intelligence has gained significant attention due to the widespread use of complex deep learning models in high-stakes domains such as medicine, finance, and autonomous cars. However, different explanations often present differe…
External link:
http://arxiv.org/abs/2404.10387