Integrating Structural and Functional Imaging for Computer Assisted Detection of Prostate Cancer on Multi-Protocol In Vivo 3 Tesla MRI
Author: | Robert Toth, Elizabeth Genega, Neil Rofsky, Jonathan Chappelow, B. Nicolas Bloch, Satish Viswanath, Anant Madabhushi, Robert E. Lenkinski, Mark A. Rosen, Arjun Kalyanpur |
Language: | English |
Year of publication: | 2009 |
Subject: | medicine.diagnostic_test, Computer science, Prostatectomy, business.industry, medicine.medical_treatment, Supervised learning, Image registration, Magnetic resonance imaging, Histology, medicine.disease, Sensor fusion, Article, Functional imaging, Prostate cancer, medicine.anatomical_structure, Prostate, Active shape model, Biopsy, medicine, Segmentation, Computer vision, Artificial intelligence, business |
Source: | Medical Imaging: Computer-Aided Diagnosis |
Description: | Screening and detection of prostate cancer (CaP) currently lack an image-based protocol, which is reflected in the high false-negative rates associated with blinded sextant biopsies. Multi-protocol magnetic resonance imaging (MRI) offers high-resolution functional and structural data about internal body structures (such as the prostate). In this paper we present a novel, comprehensive computer-aided scheme for CaP detection from high-resolution in vivo multi-protocol MRI by integrating functional and structural information obtained via dynamic contrast-enhanced (DCE) and T2-weighted (T2-w) MRI, respectively. Our scheme is fully automated and comprises (a) prostate segmentation, (b) multimodal image registration, and (c) data representation and multi-classifier modules for information fusion. Following prostate boundary segmentation via an improved active shape model, the DCE/T2-w protocols and the T2-w/ex vivo histological prostatectomy specimens are brought into alignment via a deformable, multi-attribute registration scheme. T2-w/histology alignment allows for the mapping of true CaP extent onto the in vivo MRI, which is used for training and evaluation of a multi-protocol MRI CaP classifier. The meta-classifier used is a random forest constructed by bagging multiple decision-tree classifiers, each trained individually on T2-w structural, textural, and DCE functional attributes. Three-fold classifier cross-validation was performed on a per-pixel basis using a set of 18 images derived from 6 patient datasets. Our results show that CaP detection obtained by integrating T2-w structural and textural data with DCE functional data (area under the ROC curve of 0.815) significantly outperforms detection based on either individual modality (0.704 for T2-w and 0.682 for DCE). It was also found that a meta-classifier trained directly on integrated T2-w and DCE data (data-level integration) significantly outperformed a decision-level meta-classifier constructed by combining the classifier outputs from the individual T2-w and DCE channels (an illustrative sketch of this fusion comparison follows the record below). |
Database: | OpenAIRE |
External link: |
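The abstract contrasts data-level fusion (one meta-classifier trained on concatenated T2-w and DCE per-pixel attributes) with decision-level fusion (per-channel classifiers whose outputs are combined). The sketch below, which is not the authors' code, mirrors only the structure of that comparison: a bagged decision-tree ensemble evaluated by 3-fold cross-validated ROC AUC on per-pixel feature vectors. The feature dimensions, synthetic data, and labels are placeholder assumptions; in the paper the labels come from histology mapped onto MRI via registration.

```python
"""Minimal sketch: data-level vs. decision-level fusion of two feature
channels with a random-forest-style ensemble and 3-fold CV ROC AUC."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-pixel features (assumed dimensions):
# 20 T2-w structural/textural attributes, 6 DCE time-point attributes.
n_pixels = 5000
X_t2w = rng.normal(size=(n_pixels, 20))
X_dce = rng.normal(size=(n_pixels, 6))
# Synthetic ground-truth CaP labels (placeholder for histology-derived labels).
w = rng.normal(size=20 + 6)
y = ((np.hstack([X_t2w, X_dce]) @ w
      + rng.normal(scale=2.0, size=n_pixels)) > 0).astype(int)

def cv_auc_data_level(X_a, X_b, y, n_splits=3):
    """Data-level fusion: concatenate channels, train a single ensemble."""
    X = np.hstack([X_a, X_b])
    aucs = []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=0).split(X, y):
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        clf.fit(X[tr], y[tr])
        aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
    return float(np.mean(aucs))

def cv_auc_decision_level(X_a, X_b, y, n_splits=3):
    """Decision-level fusion: one ensemble per channel, average the
    per-pixel CaP probabilities of the two channels."""
    aucs = []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=0).split(X_a, y):
        p = np.zeros(len(te))
        for X in (X_a, X_b):
            clf = RandomForestClassifier(n_estimators=50, random_state=0)
            clf.fit(X[tr], y[tr])
            p += clf.predict_proba(X[te])[:, 1]
        aucs.append(roc_auc_score(y[te], p / 2.0))
    return float(np.mean(aucs))

print("data-level fusion AUC:    ", cv_auc_data_level(X_t2w, X_dce, y))
print("decision-level fusion AUC:", cv_auc_decision_level(X_t2w, X_dce, y))
```

On synthetic features the two strategies need not differ; the sketch only illustrates the evaluation layout (per-pixel features, bagged decision trees, 3-fold cross-validation, AUC comparison) described in the abstract, not the reported 0.815 vs. 0.704/0.682 results.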