A deep dive into understanding tumor foci classification using multiparametric MRI based on convolutional neural network.
Authors: Zong W, Lee JK, Liu C, Carver EN, Feldman AM, Janic B, Elshaikh MA, Pantelic MV, Hearshen D, Chetty IJ, Movsas B, Wen N
Affiliations: Department of Radiation Oncology, Henry Ford Health System, Detroit, MI 48202, USA (Zong, Lee, Liu, Carver, Feldman, Janic, Elshaikh, Chetty, Movsas, Wen); Medical Physics Division, Department of Oncology, Wayne State University School of Medicine, Detroit, MI 48201, USA (Carver); Department of Radiology, Henry Ford Health System, Detroit, MI 48202, USA (Pantelic, Hearshen)
Language: English
Source: Medical Physics [Med Phys] 2020 Sep; Vol. 47 (9), pp. 4077-4086. Date of electronic publication: 2020 Jun 12.
DOI: 10.1002/mp.14255
Abstract:
Purpose: Deep learning models have had great success in disease classification using large data pools of skin cancer images or lung X-rays. However, data scarcity has been a roadblock to applying deep learning models directly to prostate multiparametric MRI (mpMRI). Although model interpretation has been studied heavily for natural images over the past few years, the interpretation of deep learning models trained on medical images has received little attention. In this paper, an efficient convolutional neural network (CNN) was developed, and model interpretation at various convolutional layers was systematically analyzed to improve the understanding of how CNNs interpret multimodality medical images and of the predictive power of the features at each layer. The problem of small sample size was addressed by feeding the intermediate features into a traditional classification algorithm, the weighted extreme learning machine (wELM), with the imbalanced distribution among output categories taken into consideration.
Methods: The training data were a retrospective set of prostate MR studies from the SPIE-AAPM-NCI PROSTATEx Challenge held in 2017. Three hundred twenty biopsied lesions from 201 prostate cancer patients were diagnosed and labeled as clinically significant (malignant) or not significant (benign). All studies included T2-weighted (T2W), proton density-weighted (PD-W), dynamic contrast-enhanced (DCE), and diffusion-weighted (DW) imaging. After registration and lesion-based normalization, a CNN with four convolutional layers was developed and trained with tenfold cross-validation. Features from the intermediate layers were then extracted as input to the wELM to test the discriminative power of each individual layer. The best-performing model from the ten folds was chosen to be tested on a holdout cohort from two sources. Feature maps after each convolutional layer were then visualized to monitor the trend as the layers deepened. Scatter plots were used to visualize the transformation of the data distribution. Finally, a class activation map was generated to highlight the region of interest from the model's perspective.
Results: Experimental trials indicated that the best input for the CNN was the modality combination of T2W, apparent diffusion coefficient (ADC), and DWI. (© 2020 American Association of Physicists in Medicine.)
Database: MEDLINE
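To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch of a four-convolutional-layer CNN that takes three-channel lesion patches (T2W, ADC, and DWI stacked as channels) and also exposes the intermediate activations after each convolutional block, the features the abstract says were fed to the wELM. The filter counts, kernel sizes, class names, and patch size are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class LesionCNN(nn.Module):
    """Hypothetical four-conv-layer CNN for 3-channel mpMRI lesion
    patches (T2W, ADC, DWI). A sketch, not the paper's exact model."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.fc = nn.Linear(64, n_classes)   # malignant vs. benign logits

    def forward(self, x):
        f = self.features(x)
        return self.fc(self.pool(f).flatten(1))

    def intermediate_features(self, x):
        """Return flattened activations after each conv block, usable as
        inputs to an external classifier such as the wELM below."""
        feats, out = [], x
        for layer in self.features:
            out = layer(out)
            if isinstance(layer, nn.ReLU):
                feats.append(out.flatten(1))
        return feats

# Example usage on a batch of 64x64 lesion patches (illustrative size):
# model = LesionCNN()
# feats = model.intermediate_features(torch.randn(8, 3, 64, 64))
```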
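The weighted extreme learning machine used as the downstream classifier has a standard closed-form training rule: a single hidden layer with random, fixed input weights, and output weights obtained by class-weighted, regularized least squares, where each sample is weighted inversely to its class frequency so the minority (malignant) class is not overwhelmed. The NumPy sketch below implements that general formulation; the hidden-layer size and regularization constant are assumptions, not the paper's settings.

```python
import numpy as np

class WeightedELM:
    """Sketch of a weighted extreme learning machine (wELM)."""

    def __init__(self, n_hidden=500, C=1.0, seed=0):
        self.L, self.C = n_hidden, C
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Sigmoid activations of the random, untrained hidden layer.
        return 1.0 / (1.0 + np.exp(-(X @ self.A + self.b)))

    def fit(self, X, y):
        n, d = X.shape
        self.A = self.rng.standard_normal((d, self.L))
        self.b = self.rng.standard_normal(self.L)
        H = self._hidden(X)
        # One-hot targets in {-1, +1}.
        classes, counts = np.unique(y, return_counts=True)
        idx = np.searchsorted(classes, y)
        T = -np.ones((n, len(classes)))
        T[np.arange(n), idx] = 1.0
        # Per-sample weights inversely proportional to class frequency.
        w = 1.0 / counts[idx]
        Hw = H * w[:, None]  # rows of H scaled by the diagonal weight matrix
        # Closed-form output weights: (H'WH + I/C)^-1 H'WT.
        self.beta = np.linalg.solve(
            H.T @ Hw + np.eye(self.L) / self.C, H.T @ (T * w[:, None])
        )
        self.classes_ = classes
        return self

    def predict(self, X):
        return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]
```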
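For a network that ends in global average pooling followed by a linear layer, a class activation map can be computed as the class-specific weighted sum of the final convolutional feature maps, upsampled to the input resolution. This sketch assumes the hypothetical LesionCNN defined above and is one common way to produce the kind of highlight map the abstract describes.

```python
import torch
import torch.nn.functional as F

def class_activation_map(model, x, target_class):
    """Weighted sum of the last conv layer's feature maps for one class,
    upsampled and normalized to [0, 1] for overlay visualization."""
    with torch.no_grad():
        fmap = model.features(x)           # (N, 64, h, w)
        w = model.fc.weight[target_class]  # (64,) class-specific weights
        cam = torch.einsum("nchw,c->nhw", fmap, w)
        cam = F.relu(cam)                  # keep positive evidence only
        cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                            mode="bilinear", align_corners=False)
        cam = cam - cam.amin(dim=(-2, -1), keepdim=True)
        cam = cam / cam.amax(dim=(-2, -1), keepdim=True).clamp(min=1e-8)
    return cam.squeeze(1)                  # (N, H, W) heatmaps
```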