Are Convolutional Neural Networks Trained on ImageNet Images Wearing Rose-Colored Glasses?: A Quantitative Comparison of ImageNet, Computed Tomographic, Magnetic Resonance, Chest X-Ray, and Point-of-Care Ultrasound Images for Quality

Authors: Laura N. Blaivas, Michael Blaivas
Year of publication: 2020
Source: Journal of Ultrasound in Medicine. 40:377-383
ISSN: 1550-9613
0278-4297
DOI: 10.1002/jum.15413
Description: Objectives: Deep learning for medical imaging analysis commonly uses convolutional neural networks pretrained on ImageNet (Stanford Vision Lab, Stanford, CA). Little is known about how such color- and scene-rich standard training images compare quantitatively to medical images. We sought to quantitatively compare ImageNet images to point-of-care ultrasound (POCUS), computed tomographic (CT), magnetic resonance (MR), and chest x-ray (CXR) images.
Methods: Using a quantitative image quality assessment technique (the Blind/Referenceless Image Spatial Quality Evaluator), we compared images on the basis of pixel complexity, pixel relationships, variation, and distinguishing features. We compared 5500 ImageNet images to 2700 CXR, 2300 CT, 1800 MR, and 18,000 POCUS images. Image quality scores ranged from 0 (best) to 100 (worst). A 1-way analysis of variance was performed, and the standardized mean-difference effect size (d) was calculated.
Results: ImageNet images showed the best quality score, 21.7 (95% confidence interval [CI], 0.41), of all image types except CXR, which scored 13.2 (95% CI, 0.28); CT scored 35.1 (95% CI, 0.79), MR 31.6 (95% CI, 0.75), and POCUS 56.6 (95% CI, 0.21). The differences between ImageNet and all of the medical image types were statistically significant (P ≤ .000001). The greatest difference in image quality was between ImageNet and POCUS (d = 2.38).
Conclusions: Point-of-care ultrasound (US) image quality differs significantly from that of ImageNet and the other medical image types. This has considerable implications for convolutional neural network training with medical images, and it may be even more significant in the case of US images. Ultrasound deep learning developers should consider training networks from scratch on US images, as transfer-learning techniques used for CT, CXR, and MR images may not apply to US.
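The statistical comparison described above (1-way ANOVA across image groups plus a standardized mean-difference effect size) can be sketched as follows. This is a minimal illustration using synthetic score distributions whose means mirror the abstract; the spreads and samples are invented, not the study's actual BRISQUE measurements, and it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy import stats

# Synthetic BRISQUE-style quality scores (0 = best, 100 = worst).
# Means follow the abstract; standard deviations are assumed for illustration.
rng = np.random.default_rng(42)
imagenet = rng.normal(21.7, 12.0, 5500)
pocus = rng.normal(56.6, 14.0, 18000)
cxr = rng.normal(13.2, 10.0, 2700)

# 1-way analysis of variance across the image groups.
f_stat, p_value = stats.f_oneway(imagenet, pocus, cxr)

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return abs(a.mean() - b.mean()) / np.sqrt(pooled_var)

d_imagenet_pocus = cohens_d(imagenet, pocus)
print(f"ANOVA p = {p_value:.2e}, d(ImageNet, POCUS) = {d_imagenet_pocus:.2f}")
```

With group means this far apart and samples this large, the ANOVA p-value is far below .000001 and the ImageNet-versus-POCUS effect size lands in the "very large" range, consistent with the d = 2.38 reported in the abstract.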
Database: OpenAIRE