MRI and PET image fusion using the nonparametric density model and the theory of variable-weight
Author: | Kaifeng Xue, Yuqing Song, Kai Yang, Charlie Maere, Victor S. Sheng, Zhe Liu, Chunyan Xu |
---|---|
Year of publication: | 2019 |
Subject: | Computer science; Image fusion; Gaussian (normal) distribution; Health informatics; Nuclear medicine & medical imaging; Clinical medicine; Alzheimer disease; Fluorodeoxyglucose F18; Image interpretation, computer-assisted; Humans; Image resolution; Brain mapping; Principal component analysis; Brain neoplasms; Estimation theory; Brain; Pattern recognition; Magnetic resonance imaging; Contourlet; Computer science applications; Positron-emission tomography; RGB color model; Artificial intelligence; Neurology & neurosurgery; Algorithms; Software |
Source: | Computer Methods and Programs in Biomedicine. 175:73-82 |
ISSN: | 0169-2607 |
DOI: | 10.1016/j.cmpb.2019.04.010 |
Description: | Medical image fusion is important in clinical diagnosis because it improves the availability of the information contained in images. Magnetic Resonance Imaging (MRI) provides excellent anatomical detail as well as functional information on regional changes in physiology, hemodynamics, and tissue composition. In contrast, although the spatial resolution of Positron Emission Tomography (PET) is lower than that of MRI, PET can depict molecular and pathological activity in tissue that is not available from MRI. Fusing MRI and PET may therefore combine the advantages of both imaging modalities and achieve more precise localization and characterization of abnormalities. Previous image fusion algorithms based on estimation theory assume that all distortions follow a Gaussian distribution and are therefore susceptible to the model mismatch problem. To overcome this mismatch problem, we propose a new image fusion method with multi-resolution and nonparametric density models (MRNDM). The registered RGB space of the source multi-modal medical images is first transformed into a generalized intensity-hue-saturation (GIHS) space and then decomposed into low- and high-frequency components using the non-subsampled contourlet transform (NSCT). Two fusion rules, based on the nonparametric density model and the theory of variable-weight, are developed and used to fuse the low- and high-frequency coefficients, respectively. The fused image is reconstructed by applying the inverse NSCT to the composite coefficients. Our experimental results demonstrate that images fused from PET and MRI brain images using the proposed MRNDM are of higher quality than those produced by six previous fusion methods. (An illustrative code sketch of this pipeline follows the record below.) |
Database: | OpenAIRE |
External link: |
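
Below is a minimal, illustrative sketch of the fusion pipeline described in the abstract, not the authors' implementation. As stated assumptions: a wavelet decomposition (PyWavelets) stands in for the NSCT, a plain weighted average replaces the nonparametric-density low-frequency rule, and a max-absolute rule replaces the variable-weight high-frequency rule; the function name `fuse_mri_pet` and its parameters are hypothetical.

```python
import numpy as np
import pywt


def fuse_mri_pet(mri, pet_rgb, wavelet="db2", level=3, w_low=0.5):
    """mri: 2-D float array in [0, 1]; pet_rgb: H x W x 3 float array in [0, 1], registered to mri."""
    # Generalized IHS (fast IHS): intensity is the mean of the three PET bands.
    intensity = pet_rgb.mean(axis=2)

    # Multi-resolution decomposition of both intensity images
    # (a wavelet transform stands in for the paper's NSCT).
    c_mri = pywt.wavedec2(mri, wavelet, level=level)
    c_pet = pywt.wavedec2(intensity, wavelet, level=level)

    # Low-frequency band: simple weighted average
    # (placeholder for the nonparametric-density-based rule).
    fused = [w_low * c_mri[0] + (1.0 - w_low) * c_pet[0]]

    # High-frequency bands: keep the coefficient with the larger magnitude
    # (placeholder for the variable-weight rule).
    for det_mri, det_pet in zip(c_mri[1:], c_pet[1:]):
        fused.append(tuple(np.where(np.abs(m) >= np.abs(p), m, p)
                           for m, p in zip(det_mri, det_pet)))

    # Inverse transform, cropped back to the input size.
    fused_intensity = pywt.waverec2(fused, wavelet)[:mri.shape[0], :mri.shape[1]]

    # GIHS back-substitution: add the intensity change to every RGB band,
    # which keeps the PET hue while injecting the fused detail.
    delta = (fused_intensity - intensity)[..., np.newaxis]
    return np.clip(pet_rgb + delta, 0.0, 1.0)


# Example with synthetic data; real inputs must already be co-registered.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mri = rng.random((128, 128))
    pet = rng.random((128, 128, 3))
    print(fuse_mri_pet(mri, pet).shape)  # (128, 128, 3)
```

The GIHS back-substitution used here is the common fast-IHS shortcut (add the intensity difference to each RGB band); the paper's actual fusion rules operate on the NSCT coefficients and are not reproduced in this sketch.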