Towards optimal deep fusion of imaging and clinical data via a model-based description of fusion quality.
Author: | Wang Y; Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA., Li X; Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA., Konanur M; Department of Radiology, Duke University, Durham, North Carolina, USA., Konkel B; Department of Radiology, Duke University, Durham, North Carolina, USA., Seyferth E; Department of Radiology, Duke University, Durham, North Carolina, USA., Brajer N; Department of Radiology, Duke University, Durham, North Carolina, USA., Liu JG; Department of Mathematics, Duke University, Durham, North Carolina, USA.; Department of Physics, Duke University, Durham, North Carolina, USA., Bashir MR; Department of Radiology, Duke University, Durham, North Carolina, USA.; Department of Medicine, Gastroenterology, Duke University, Durham, North Carolina, USA., Lafata KJ; Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA.; Department of Radiology, Duke University, Durham, North Carolina, USA.; Department of Radiation Oncology, Duke University, Durham, North Carolina, USA. |
Language: | English |
Source: | Medical physics [Med Phys] 2023 Jun; Vol. 50 (6), pp. 3526-3537. Date of Electronic Publication: 2023 Jan 07. |
DOI: | 10.1002/mp.16181 |
Abstract: | Background: Due to intrinsic differences in data formatting, data structure, and underlying semantic information, the integration of imaging data with clinical data can be non-trivial. Optimal integration requires robust data fusion, that is, the process of integrating multiple data sources to produce more useful information than is captured by any individual source. Here, we introduce the concept of fusion quality for deep learning problems involving imaging and clinical data. We first provide a general theoretical framework and numerical validation of our technique. To demonstrate real-world applicability, we then apply our technique to optimize the fusion of CT imaging and hepatic blood markers to estimate portal venous hypertension, which is linked to prognosis in patients with cirrhosis of the liver. Purpose: To develop a method for measuring optimal data fusion quality in deep learning problems that utilize both imaging and clinical data. Methods: Our approach is based on modeling the fully connected layer (FCL) of a convolutional neural network (CNN) as a potential function, whose distribution takes the form of the classical Gibbs measure. The features of the FCL are then modeled as random variables governed by state functions, which are interpreted as the different data sources to be fused. The probability density of each source, relative to the probability density of the FCL, represents a quantitative measure of source bias. To minimize this source bias and optimize CNN performance, we implement a vector-growing encoding scheme called positional encoding, where low-dimensional clinical data are transcribed into a rich feature space that complements high-dimensional imaging features. We first provide a numerical validation of our approach based on simulated Gaussian processes. We then apply our approach to patient data, where we optimize the fusion of CT images with blood markers to predict portal venous hypertension in patients with cirrhosis of the liver. This patient study was based on a modified ResNet-152 model that incorporates both images and blood markers as input. These two data sources were processed in parallel, fused into a single FCL, and optimized based on our fusion quality framework. Results: Numerical validation of our approach confirmed that the probability density function of a fused feature space converges to a source-specific probability density function when source data are improperly fused. Our numerical results demonstrate that this phenomenon can be quantified as a measure of fusion quality. On patient data, the fused model, consisting of both imaging data and positionally encoded blood markers fused at the theoretically optimal fusion quality, achieved an AUC of 0.74 and an accuracy of 0.71. This model was statistically superior to the imaging-only model (AUC = 0.60; accuracy = 0.62), the blood marker-only model (AUC = 0.58; accuracy = 0.60), and a variety of purposely sub-optimized fusion models (AUC = 0.61-0.70; accuracy = 0.58-0.69). Conclusions: We introduced the concept of data fusion quality for multi-source deep learning problems involving both imaging and clinical data. We provided a theoretical framework, numerical validation, and a real-world application in abdominal radiology. Our data suggest that CT imaging and hepatic blood markers provide complementary diagnostic information when appropriately fused. (© 2022 American Association of Physicists in Medicine.) |
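The Gibbs-measure formulation in the Methods can be written out explicitly. The notation below is an assumption, since the abstract does not fix symbols: let $z$ denote the FCL activations and $U(z)$ the potential function, so that the fused feature distribution takes the form of the classical Gibbs measure

$$ \mu(z) = \frac{1}{Z}\, e^{-\beta U(z)}, \qquad Z = \int e^{-\beta U(z)}\, dz, $$

where $\beta$ is an inverse-temperature parameter and $Z$ the normalizing constant. If each data source $s$ induces its own feature density $\mu_s(z)$, then the "probability density of each source, relative to the probability density of the FCL" suggests a source-bias measure of divergence form,

$$ b_s = D\left(\mu_s \,\middle\|\, \mu\right), \quad \text{e.g.,}\quad D_{\mathrm{KL}}\left(\mu_s \,\middle\|\, \mu\right) = \int \mu_s(z) \log \frac{\mu_s(z)}{\mu(z)}\, dz, $$

with optimal fusion minimizing the bias across sources. The choice of the KL divergence here is an illustrative assumption, not taken from the paper.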
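The positional-encoding step, transcribing low-dimensional clinical data into a richer feature space, can be sketched as below. This is a minimal sketch in the style of transformer sinusoidal encodings; the dimension `dim`, the base frequency, and the per-marker concatenation are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def positional_encode(x, dim=64, base=10000.0):
    """Transcribe one scalar clinical value x (e.g., a normalized blood
    marker) into a dim-dimensional sinusoidal feature vector, in the
    style of transformer positional encodings. dim and base are
    assumed hyperparameters."""
    freqs = base ** (-np.arange(0, dim, 2) / dim)  # dim/2 geometric frequencies
    angles = x * freqs
    enc = np.empty(dim)
    enc[0::2] = np.sin(angles)  # even slots: sine components
    enc[1::2] = np.cos(angles)  # odd slots: cosine components
    return enc

# Encode a hypothetical panel of three normalized blood markers and
# concatenate into one clinical feature vector.
markers = np.array([0.31, 0.78, 0.05])
clinical_features = np.concatenate([positional_encode(m) for m in markers])
print(clinical_features.shape)  # (192,)
```

The point of the "vector-growing" idea is visible in the shapes: three scalars become a 192-dimensional vector, closer in scale to the high-dimensional imaging features they must be fused with.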
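The numerical validation on simulated Gaussian processes can be illustrated with a toy experiment: draw features from two Gaussian "sources", fuse them in varying proportions, and watch the fused density drift toward one source-specific density as the fusion becomes skewed. The Gaussian parameters, the fusion-by-subsampling scheme, and the KL-based readout are assumptions for illustration only, not the paper's validation protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two simulated sources with different Gaussian feature distributions.
mu_a, mu_b = np.zeros(2), np.full(2, 3.0)
src_a = rng.multivariate_normal(mu_a, np.eye(2), 5000)
src_b = rng.multivariate_normal(mu_b, np.eye(2), 5000)

def kl_gaussian(m0, S0, m1, S1):
    """Closed-form KL divergence KL(N(m0,S0) || N(m1,S1))."""
    k = len(m0)
    iS1 = np.linalg.inv(S1)
    d = m1 - m0
    return 0.5 * (np.trace(iS1 @ S0) + d @ iS1 @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

for w in (0.05, 0.5, 0.95):  # weight of source A in the fused features
    fused = np.vstack([src_a[: int(w * 5000)], src_b[: int((1 - w) * 5000)]])
    m, S = fused.mean(0), np.cov(fused.T)
    print(f"w={w:.2f}  KL(A||fused)={kl_gaussian(mu_a, np.eye(2), m, S):.2f}  "
          f"KL(B||fused)={kl_gaussian(mu_b, np.eye(2), m, S):.2f}")
```

At extreme weights the fused density converges to one source-specific density (one KL term collapses while the other grows), mirroring the abstract's observation that improper fusion is detectable, and quantifiable, from the feature distribution alone.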
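The patient-study architecture, a modified ResNet-152 ingesting images and blood markers in parallel and fusing them into a single FCL, might look roughly like the following PyTorch sketch. Only the two-branch, concatenate, single-FCL topology is described in the abstract; the branch widths, the 512-unit FCL, and the clinical-branch MLP are assumed for illustration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class FusionNet(nn.Module):
    """Two-branch fusion sketch: a ResNet-152 image branch and a small
    clinical branch processed in parallel, concatenated into a single
    fully connected layer (the FCL analyzed in the paper). Layer widths
    are illustrative assumptions, not the authors' configuration."""

    def __init__(self, clinical_dim=192, n_classes=2):
        super().__init__()
        backbone = models.resnet152(weights=None)
        backbone.fc = nn.Identity()  # expose the 2048-d pooled image features
        self.image_branch = backbone
        self.clinical_branch = nn.Sequential(
            nn.Linear(clinical_dim, 256), nn.ReLU())
        self.fcl = nn.Linear(2048 + 256, 512)  # the fused FCL
        self.head = nn.Linear(512, n_classes)

    def forward(self, image, clinical):
        # Process the two sources in parallel, then fuse by concatenation.
        z = torch.cat([self.image_branch(image),
                       self.clinical_branch(clinical)], dim=1)
        return self.head(torch.relu(self.fcl(z)))
```

Under the paper's framework, the activations of `self.fcl` would be the feature space whose source-wise densities are compared to quantify, and then minimize, source bias.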
Database: | MEDLINE |
External link: |