Automated diagnosis of prostate cancer in multi-parametric MRI based on multimodal convolutional neural networks.

Author: Minh Hung Le, Jingyu Chen, Liang Wang, Zhiwei Wang, Wenyu Liu, Kwang-Ting (Tim) Cheng, Xin Yang
Source: Physics in Medicine & Biology; 8/21/2017, Vol. 62 Issue 16, p1-1, 1p
Abstract: Automated methods for prostate cancer (PCa) diagnosis in multi-parametric magnetic resonance imaging (MP-MRI) are critical for reducing the burden of image interpretation on radiologists while helping to improve diagnostic accuracy (Artan et al 2010 IEEE Trans. Image Process. 19 2444–55, Litjens et al 2014 IEEE Trans. Med. Imaging 33 1083–92, Liu et al 2013 SPIE Medical Imaging (International Society for Optics and Photonics) p 86701G, Moradi et al 2012 J. Magn. Reson. Imaging 35 1403–13, Niaf et al 2014 IEEE Trans. Image Process. 23 979–91, Niaf et al 2012 Phys. Med. Biol. 57 3833, Peng et al 2013a SPIE Medical Imaging (International Society for Optics and Photonics) p 86701H, Peng et al 2013b Radiology 267 787–96, Wang et al 2014 BioMed. Res. Int. 2014). This paper presents an automated method based on multimodal convolutional neural networks (CNNs) for two PCa diagnostic tasks: (1) distinguishing between cancerous and noncancerous tissues and (2) distinguishing between clinically significant (CS) and indolent PCa. Specifically, our multimodal CNNs effectively fuse apparent diffusion coefficient (ADC) maps and T2-weighted MP-MRI images (T2WIs). To fuse ADCs and T2WIs effectively, we design a new similarity loss function that enforces the extraction of consistent features from both modalities. The similarity loss is combined with the conventional classification loss and integrated into the back-propagation procedure of CNN training. The similarity loss enables better fusion results than existing methods because the feature learning processes of the two modalities mutually guide each other, jointly helping the CNNs to 'see' the true visual patterns of PCa. The classification results of the multimodal CNNs are further combined with results based on handcrafted features using a support vector machine (SVM) classifier. To achieve an accuracy satisfactory for clinical use, we comprehensively investigate three critical factors that can greatly affect the performance of multimodal CNNs but have not been carefully studied previously: (1) given limited training data, how can the data be augmented in sufficient quantity and variety for fine-tuning deep CNNs for PCa diagnosis? (2) How can multimodal MP-MRI information be effectively combined in CNNs? (3) What is the impact of different CNN architectures on the accuracy of PCa diagnosis? Experimental results on extensive clinical data from 364 patients, with a total of 463 PCa lesions and 450 identified noncancerous image patches, demonstrate that our system achieves a sensitivity of 89.85% and a specificity of 95.83% for distinguishing cancerous from noncancerous tissues, and a sensitivity of 100% and a specificity of 76.92% for distinguishing indolent from CS PCa. These results are significantly superior to those of the state-of-the-art method relying on handcrafted features. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
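
The abstract's central mechanism, training two modality-specific CNN streams under a joint classification and similarity loss, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the layer sizes, the mean-squared-error form of the similarity term, and the weighting factor `lam` are assumptions, since the abstract does not specify them.

```python
# Minimal sketch (assumed architecture, not the paper's): a two-stream CNN
# fusing ADC and T2WI patches, with a similarity loss that encourages both
# streams to extract consistent features, per the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityStream(nn.Module):
    """One CNN stream for a single MP-MRI modality (ADC or T2WI)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 16 * 16, feat_dim)  # assumes 64x64 input patches

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class MultimodalCNN(nn.Module):
    def __init__(self, feat_dim=128, n_classes=2):
        super().__init__()
        self.adc_stream = ModalityStream(feat_dim)
        self.t2w_stream = ModalityStream(feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, adc, t2w):
        f_adc = self.adc_stream(adc)
        f_t2w = self.t2w_stream(t2w)
        logits = self.classifier(torch.cat([f_adc, f_t2w], dim=1))
        return logits, f_adc, f_t2w

def total_loss(logits, f_adc, f_t2w, labels, lam=0.1):
    # Classification loss plus similarity loss; both terms are back-propagated
    # jointly, so each stream's feature learning guides the other.
    cls = F.cross_entropy(logits, labels)
    sim = F.mse_loss(f_adc, f_t2w)  # assumed form of the similarity term
    return cls + lam * sim

# Example usage with random tensors standing in for 64x64 patches:
model = MultimodalCNN()
adc, t2w = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))
logits, f_adc, f_t2w = model(adc, t2w)
loss = total_loss(logits, f_adc, f_t2w, labels)
loss.backward()  # similarity and classification gradients flow jointly
```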
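
The final decision-fusion step, combining the multimodal CNN's outputs with handcrafted features in an SVM, might look like the sketch below. The RBF kernel, feature scaling, and function names are assumptions for illustration; the abstract only states that an SVM combines the two result streams.

```python
# Illustrative sketch of the decision-level fusion step: the multimodal CNN's
# class probabilities are concatenated with handcrafted features and
# classified with a support vector machine. Kernel choice and preprocessing
# are assumptions, not details given in the abstract.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse_and_classify(cnn_probs, handcrafted, labels):
    """cnn_probs: (N, 2) softmax outputs; handcrafted: (N, D) features."""
    X = np.hstack([cnn_probs, handcrafted])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, labels)
    return clf
```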