Showing 1 - 10 of 13
for search: '"Roschewitz, Mélanie"'
Author:
Jones, Charles, Ribeiro, Fabio de Sousa, Roschewitz, Mélanie, Castro, Daniel C., Glocker, Ben
We investigate the prominent class of fair representation learning methods for bias mitigation. Using causal reasoning to define and formalise different sources of dataset bias, we reveal important implicit assumptions inherent to these methods. We p…
External link:
http://arxiv.org/abs/2410.04120
This study investigates the effects of radio-opaque artefacts, such as skin markers, breast implants, and pacemakers, on mammography classification models. After manually annotating 22,012 mammograms from the publicly available EMBED dataset, a robus…
External link:
http://arxiv.org/abs/2410.03809
Contrastive pretraining can substantially increase model generalisation and downstream performance. However, the quality of the learned representations is highly dependent on the data augmentation strategy applied to generate positive pairs. Positive…
External link:
http://arxiv.org/abs/2409.10365
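The snippet above refers to generating positive pairs via data augmentation for contrastive pretraining. As a minimal illustrative sketch (not the paper's pipeline), a positive pair is simply two randomly augmented views of the same image; the augmentations below (random horizontal flip plus Gaussian noise) are hypothetical stand-ins for a real augmentation strategy:

```python
import numpy as np

def make_positive_pair(image: np.ndarray, rng: np.random.Generator,
                       noise_std: float = 0.05):
    """Create two augmented 'views' of the same image.

    In contrastive pretraining, the two views form a positive pair:
    the encoder is trained to map them close together in representation
    space. The augmentations here are illustrative only.
    """
    def augment(x):
        if rng.random() < 0.5:
            x = x[:, ::-1]  # random horizontal flip
        return x + rng.normal(0.0, noise_std, size=x.shape)  # additive noise
    return augment(image), augment(image)

rng = np.random.default_rng(0)
view1, view2 = make_positive_pair(np.zeros((8, 8)), rng)
```

The key design point the snippet alludes to: the augmentations must preserve the semantic content that downstream tasks rely on, otherwise the encoder is trained to discard it.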
Causal generative modelling is gaining interest in medical imaging due to its ability to answer interventional and counterfactual queries. Most work focuses on generating counterfactual images that look plausible, using auxiliary classifiers to enfor…
External link:
http://arxiv.org/abs/2403.09422
Contrastive pretraining is well known to improve downstream task performance and model generalisation, especially in limited-label settings. However, it is sensitive to the choice of augmentation pipeline. Positive pairs should preserve semantic info…
External link:
http://arxiv.org/abs/2403.09605
Medical image segmentation is a challenging task, made more difficult by many datasets' limited size and annotations. Denoising diffusion probabilistic models (DDPM) have recently shown promise in modelling the distribution of natural images and were…
External link:
http://arxiv.org/abs/2311.07421
Author:
Roschewitz, Mélanie, Glocker, Ben
Published in:
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 4549-4559
Performance estimation under covariate shift is a crucial component of safe AI model deployment, especially for sensitive use-cases. Recently, several solutions were proposed to tackle this problem, most leveraging model predictions or softmax confid…
External link:
http://arxiv.org/abs/2308.07223
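The snippet above mentions performance-estimation methods that leverage softmax confidence. A minimal sketch of one common confidence-based baseline (not the paper's own method) is "average confidence": on an unlabelled, possibly shifted dataset, estimate accuracy as the mean of the maximum softmax probability per example. This baseline is known to over-estimate accuracy when the model is poorly calibrated under shift:

```python
import numpy as np

def average_confidence(logits: np.ndarray) -> float:
    """Estimate accuracy on unlabelled data as the mean max softmax probability.

    logits: array of shape (n_examples, n_classes).
    Returns a scalar in (0, 1]. This is the simple 'average confidence'
    baseline for performance estimation under covariate shift.
    """
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(probs.max(axis=1).mean())

# Example: two confident two-class predictions give a high estimate.
est = average_confidence(np.array([[2.0, 0.0], [0.0, 2.0]]))
```

For perfectly uniform logits the estimate collapses to 1/n_classes, which is the expected behaviour of a chance-level model.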
We investigate performance disparities in deep classifiers. We find that the ability of classifiers to separate individuals into subgroups varies substantially across medical imaging modalities and protected characteristics; crucially, we show that t…
External link:
http://arxiv.org/abs/2307.02791
Purpose: To analyze a recently published chest radiography foundation model for the presence of biases that could lead to subgroup performance disparities across biological sex and race. Materials and Methods: This retrospective study used 127,118 ch…
External link:
http://arxiv.org/abs/2209.02965
Academic article
This result cannot be displayed to users who are not signed in.
Sign in to view this result.