Author:
Smedley NF; Medical Imaging Informatics, Departments of Radiological Sciences and Bioengineering, University of California, Los Angeles., Hsu W; Medical Imaging Informatics, Departments of Radiological Sciences and Bioengineering, University of California, Los Angeles.
Language:
English
Source:
Proceedings. IEEE International Symposium on Biomedical Imaging [Proc IEEE Int Symp Biomed Imaging] 2018 Apr; Vol. 2018, pp. 1529-1533. Date of Electronic Publication: 2018 May 24.
DOI:
10.1109/ISBI.2018.8363864
Abstract:
Radiogenomic studies have suggested that the biological heterogeneity of tumors is reflected radiographically through visible features on magnetic resonance (MR) images. We apply deep learning techniques to map between tumor gene expression profiles and tumor morphology in pre-operative MR studies of glioblastoma patients. A deep autoencoder was trained on 528 patients, each with 12,042 gene expression values. The autoencoder's weights were then used to initialize a supervised deep neural network, which was trained on a subset of 109 patients with both gene expression and MR data. For each patient, 20 morphological image features were extracted from the contrast-enhancing and peritumoral edema regions. We found that a neural network pre-trained with an autoencoder and regularized with dropout had lower errors than linear regression in predicting tumor morphology features, by an average of 16.98% mean absolute percent error and 0.0114 mean absolute error, with several features differing significantly (adjusted p-value < 0.05). These results indicate that neural networks, which can capture nonlinear, hierarchical relationships among gene expressions, may have the representational power to find more predictive radiogenomic associations than pairwise or linear methods.
Database:
MEDLINE
External link:
|
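The abstract above describes an unsupervised-pretraining scheme: a deep autoencoder is first trained on the gene expression profiles, and its encoder weights then initialize a supervised network (with dropout) that regresses the 20 morphological MR features. The following is a minimal PyTorch sketch of that scheme; the input and output dimensions come from the abstract, but the hidden layer sizes, dropout rate, optimizer settings, and loss functions are illustrative assumptions, not the authors' published configuration.

# Illustrative sketch of autoencoder pretraining followed by supervised
# fine-tuning, as outlined in the abstract. Layer sizes, dropout rate,
# learning rates, and losses are assumptions for demonstration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

N_GENES = 12042       # gene expression values per patient (from the abstract)
N_FEATURES = 20       # morphological image features per patient (from the abstract)
HIDDEN = [1000, 100]  # assumed encoder layer sizes

class Autoencoder(nn.Module):
    """Unsupervised model trained to reconstruct gene expression profiles."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_GENES, HIDDEN[0]), nn.ReLU(),
            nn.Linear(HIDDEN[0], HIDDEN[1]), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(HIDDEN[1], HIDDEN[0]), nn.ReLU(),
            nn.Linear(HIDDEN[0], N_GENES),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class RadiogenomicNet(nn.Module):
    """Supervised model: encoder initialized from the autoencoder, plus
    dropout and a regression head for the 20 image features."""
    def __init__(self, pretrained: Autoencoder):
        super().__init__()
        self.encoder = pretrained.encoder          # reuse pretrained weights
        self.head = nn.Sequential(
            nn.Dropout(0.5),                       # assumed dropout rate
            nn.Linear(HIDDEN[1], N_FEATURES),
        )

    def forward(self, x):
        return self.head(self.encoder(x))

def pretrain(ae, loader, epochs=10):
    # Unsupervised stage: reconstruct gene expressions (genes only, no labels).
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for (x,) in loader:
            opt.zero_grad()
            loss_fn(ae(x), x).backward()
            opt.step()

def finetune(model, loader, epochs=10):
    # Supervised stage: map gene expressions to image features; L1 loss
    # chosen here because the abstract reports mean absolute error.
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

if __name__ == "__main__":
    # Tiny synthetic example so the sketch runs end to end.
    genes = torch.randn(32, N_GENES)
    feats = torch.rand(32, N_FEATURES)
    ae_loader = DataLoader(TensorDataset(genes), batch_size=8)
    sup_loader = DataLoader(TensorDataset(genes, feats), batch_size=8)
    ae = Autoencoder()
    pretrain(ae, ae_loader, epochs=1)
    model = RadiogenomicNet(ae)
    finetune(model, sup_loader, epochs=1)

In the paper's setting, the pretraining loader would draw from the 528 patients with gene expression data and the fine-tuning loader from the 109 patients with both gene expression and MR-derived morphological features; the comparison baseline reported in the abstract is linear regression on the same prediction task.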