Trust-informed Decision-Making Through An Uncertainty-Aware Stacked Neural Networks Framework: Case Study in COVID-19 Classification
Author: Gharoun, Hassan; Khorshidi, Mohammad Sadegh; Chen, Fang; Gandomi, Amir H.
Year of publication: 2024
Subject:
Document type: Working Paper
Description: This study presents an uncertainty-aware stacked neural networks model for the reliable classification of COVID-19 from radiological images. The model addresses a critical gap in uncertainty-aware modeling by focusing on accurately identifying confidently correct predictions while alerting users to confidently incorrect and uncertain predictions, which can promote trust in automated systems. The architecture integrates uncertainty quantification methods, including Monte Carlo dropout and ensemble techniques, to enhance predictive reliability by assessing the certainty of diagnostic predictions. Within a two-tier framework, the first-tier model generates initial predictions and their associated uncertainties, which the second-tier model uses to produce a trust indicator alongside the diagnostic outcome. This dual-output model not only predicts COVID-19 cases but also supplies a trust flag indicating the reliability of each diagnosis, aiming to minimize the need for retesting and expert verification. The effectiveness of this approach is demonstrated through extensive experiments on the COVIDx CXR-4 dataset, which show a novel approach to identifying and handling confidently incorrect and uncertain cases, thereby enhancing the trustworthiness of automated diagnostics in clinical settings (see the sketch after this record). Comment: 15 pages, 7 figures, 6 tables
Database: arXiv
External link:
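
A minimal sketch, in PyTorch, of the two-tier idea described in the abstract above: the first tier makes Monte Carlo dropout predictions and summarizes their uncertainty, and the second tier maps those statistics to a class label plus a trust flag. The toy backbone, layer sizes, the entropy-based uncertainty feature, and the names `TierOne`, `TierTwo`, and `mc_dropout_predict` are all illustrative assumptions; the paper's actual architecture, training procedure, and ensemble component are not reproduced here.

```python
# Illustrative sketch only: not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TierOne(nn.Module):
    """Toy image classifier with dropout so MC sampling is possible."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),                    # stays active during MC sampling
            nn.Linear(8 * 112 * 112, n_classes),  # assumes 224x224 grayscale input
        )

    def forward(self, x):
        return self.classifier(self.features(x))


@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, T: int = 20):
    """Run T stochastic forward passes; return mean probs and predictive entropy."""
    model.train()  # keep dropout on at inference; safe here since there is no BatchNorm
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(T)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=-1)
    return mean_probs, entropy


class TierTwo(nn.Module):
    """Maps tier-one outputs to a diagnosis plus a trust logit (dual output)."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.net = nn.Linear(n_classes + 1, n_classes + 1)  # +1 input for entropy

    def forward(self, mean_probs, entropy):
        z = torch.cat([mean_probs, entropy.unsqueeze(-1)], dim=-1)
        out = self.net(z)
        return out[..., :-1], out[..., -1]  # class logits, trust logit


# Usage on a dummy batch of 224x224 grayscale chest X-rays.
x = torch.randn(4, 1, 224, 224)
tier1, tier2 = TierOne(), TierTwo()
mean_probs, entropy = mc_dropout_predict(tier1, x)
class_logits, trust_logit = tier2(mean_probs, entropy)
pred = class_logits.argmax(dim=-1)
trusted = torch.sigmoid(trust_logit) > 0.5  # flag each diagnosis as trustworthy or not
```

Keeping dropout active at inference (`model.train()`) is what makes the T forward passes stochastic; the predictive entropy of the averaged softmax then serves as a simple uncertainty summary that the second tier can combine with the prediction to emit the trust flag.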