Out-of-distribution detection with in-distribution voting using the medical example of chest x-ray classification.
Author: Wollek A (Munich Institute of Biomedical Engineering and the School of Computation, Information, and Technology, Technical University of Munich, Munich, Germany); Willem T (Institute for History and Ethics in Medicine and Munich School of Technology in Society, Technical University of Munich, Munich, Germany); Ingrisch M (Department of Radiology, University Hospital Ludwig-Maximilians-Universität, Munich, Germany); Sabel B (Department of Radiology, University Hospital Ludwig-Maximilians-Universität, Munich, Germany); Lasser T (Munich Institute of Biomedical Engineering and the School of Computation, Information, and Technology, Technical University of Munich, Munich, Germany)
Language: English
Source: Medical Physics [Med Phys] 2024 Apr; Vol. 51 (4), pp. 2721-2732. Date of Electronic Publication: 2023 Oct 13.
DOI: 10.1002/mp.16790
Abstract:
Background: Deep learning models are being applied to more and more use cases with astonishing success stories, but how do they perform in the real world? Models are typically tested on specific, cleaned data sets, but when deployed in the real world, a model will encounter unexpected, out-of-distribution (OOD) data.
Purpose: To investigate the impact of OOD radiographs on existing chest x-ray classification models and to increase their robustness against OOD data.
Methods: The study employed the commonly used chest x-ray classification model CheXnet, trained on the chest x-ray 14 data set, and tested its robustness against OOD data using three public radiography data sets (IRMA, Bone Age, and MURA) and the ImageNet data set. To detect OOD data for multi-label classification, we proposed in-distribution voting (IDV). OOD detection performance was measured across data sets using area under the receiver operating characteristic curve (AUC) analysis and compared with Mahalanobis-based OOD detection, MaxLogit, MaxEnergy, self-supervised OOD detection (SS OOD), and CutMix.
Results: Without additional OOD detection, the chest x-ray classifier failed to discard any OOD images, with an AUC of 0.5. The proposed IDV approach, trained on ID (chest x-ray 14) and OOD data (IRMA and ImageNet), achieved an average OOD AUC of 0.999 across the three data sets, surpassing all other OOD detection methods. Mahalanobis-based OOD detection achieved an average OOD detection AUC of 0.982. IDV trained solely with a few thousand ImageNet images had an AUC of 0.913, considerably higher than MaxLogit (0.726), MaxEnergy (0.724), SS OOD (0.476), and CutMix (0.376).
Conclusions: The performance of the tested OOD detection methods did not translate well to the radiography data sets, with the exception of Mahalanobis-based OOD detection and the proposed IDV method. Consequently, training solely on ID data led to OOD images being incorrectly classified as ID, resulting in increased false positive rates. IDV substantially improved the model's ID classification performance, even when trained with data that will not occur in the intended use case or test set (ImageNet), without additional inference overhead or a performance decrease in the target classification. The corresponding code is available at https://gitlab.lrz.de/IP/a-knee-cannot-have-lung-disease.
(© 2023 The Authors. Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.)
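The abstract compares score-based OOD detectors (MaxLogit, MaxEnergy) against the proposed in-distribution voting, all evaluated by OOD AUC. The sketch below is not the authors' implementation (see the linked repository for that); it only illustrates how such scores and the OOD AUC are typically computed for a multi-label classifier. The function names, the temperature parameter, and the simple "any class above its threshold" voting rule are illustrative assumptions.

```python
# Minimal sketch, assuming a PyTorch multi-label classifier and data loaders.
import numpy as np
import torch
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def collect_logits(model, loader, device="cpu"):
    """Run the classifier and return an (N, num_classes) array of logits."""
    model.eval()
    outputs = []
    for images, *_ in loader:
        outputs.append(model(images.to(device)).cpu())
    return torch.cat(outputs).numpy()

def max_logit_score(logits):
    # MaxLogit score: the largest logit per image; low values suggest OOD input.
    return logits.max(axis=1)

def max_energy_score(logits, temperature=1.0):
    # Negative energy score: T * logsumexp(logits / T); low values suggest OOD input.
    z = torch.from_numpy(logits) / temperature
    return (temperature * torch.logsumexp(z, dim=1)).numpy()

def ood_auc(id_scores, ood_scores):
    # AUC of separating ID (label 1) from OOD (label 0) images by the score.
    labels = np.concatenate([np.ones_like(id_scores), np.zeros_like(ood_scores)])
    scores = np.concatenate([id_scores, ood_scores])
    return roc_auc_score(labels, scores)

def in_distribution_vote(logits, thresholds):
    """Loose reading of in-distribution voting (assumption): keep an image as ID
    if at least one class-specific sigmoid output exceeds its ID threshold."""
    probs = torch.sigmoid(torch.from_numpy(logits)).numpy()
    return (probs >= thresholds).any(axis=1)
```

With logits collected from an ID and an OOD loader, a call such as ood_auc(max_energy_score(id_logits), max_energy_score(ood_logits)) yields the kind of OOD AUC figures quoted in the Results section; the exact IDV training and voting procedure is described in the paper and repository.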
Database: MEDLINE