Automated abnormality classification of chest radiographs using deep convolutional neural networks
Authors: | Youbao Tang, Zhiyong Lu, Mei Han, Ronald M. Summers, Catherine Brandon, Ke Yan, Yifan Peng, Yuxing Tang, Mohammadhadi Bagheri, Jing Xiao, Bernadette Redd |
---|---|
Language: | English |
Year of publication: | 2020 |
Subjects: | Prioritization; Radiology workflow; Radiography; Computer applications to medicine. Medical informatics (R858-859.7); Medicine (miscellaneous); Health Informatics; Convolutional neural network; Health Information Management; Medical imaging; Adult patients; Medical practice; Computer Science Applications; Radiology; Abnormality; Biomedical engineering |
Source: | npj Digital Medicine, Vol. 3, Iss. 1, pp. 1-8 (2020) |
ISSN: | 2398-6352 |
Description: | As one of the most ubiquitous diagnostic imaging tests in medical practice, chest radiography requires timely reporting of potential findings and diagnosis of diseases in the images. Automated, fast, and reliable detection of diseases based on chest radiography is therefore a critical step in the radiology workflow. In this work, we developed and evaluated various deep convolutional neural networks (CNNs) for differentiating between normal and abnormal frontal chest radiographs, in order to alert radiologists and clinicians to potential abnormal findings for worklist triaging and reporting prioritization. A CNN-based model achieved an AUC of 0.9824 ± 0.0043 (with an accuracy of 94.64 ± 0.45%, a sensitivity of 96.50 ± 0.36%, and a specificity of 92.86 ± 0.48%) for normal versus abnormal chest radiograph classification. The CNN model obtained an AUC of 0.9804 ± 0.0032 (with an accuracy of 94.71 ± 0.32%, a sensitivity of 92.20 ± 0.34%, and a specificity of 96.34 ± 0.31%) for normal versus lung opacity classification. Classification performance on an external dataset showed that the CNN model is likely to be highly generalizable, with an AUC of 0.9444 ± 0.0029. The CNN model pre-trained on cohorts of adult patients and fine-tuned on pediatric patients achieved an AUC of 0.9851 ± 0.0046 for normal versus pneumonia classification. Pretraining on natural images proved beneficial for a moderate-sized training set of about 8,500 images. The high diagnostic accuracy observed in this study shows that deep CNNs can accurately and effectively differentiate normal and abnormal chest radiographs, thereby providing potential benefits to radiology workflow and patient care. |
Database: | OpenAIRE |
External link: |