MR-Contrast-Aware Image-to-Image Translations with Generative Adversarial Networks
Authors: | Eva Rothgang, Jens Guehring, Jonas Denck, Andreas Maier |
---|---|
Year of publication: | 2021 |
Subject: |
FOS: Computer and information sciences; Generative adversarial networks; Computer science; Computer Vision and Pattern Recognition (cs.CV); 0206 medical engineering; Biomedical Engineering; Computer Science - Computer Vision and Pattern Recognition; Health Informatics; 02 engineering and technology; Signal-to-Noise Ratio; Translation (geometry); 030218 nuclear medicine & medical imaging; Image (mathematics); 03 medical and health sciences; 0302 clinical medicine; Magnetic resonance imaging; Medical imaging; Image Processing, Computer-Assisted; FOS: Electrical engineering, electronic engineering, information engineering; Humans; Radiology, Nuclear Medicine and Imaging; Sequence; Basis (linear algebra); Deep learning; Image and Video Processing (eess.IV); Contrast (statistics); Pattern recognition; General Medicine; Image synthesis; Electrical Engineering and Systems Science - Image and Video Processing; 020601 biomedical engineering; Computer Graphics and Computer-Aided Design; Computer Science Applications; Benchmark (computing); ddc:000; Surgery; Original Article; Computer Vision and Pattern Recognition; Artificial intelligence |
Source: | International Journal of Computer Assisted Radiology and Surgery |
DOI: | 10.48550/arxiv.2104.01449 |
Description: | Purpose: A magnetic resonance imaging (MRI) exam typically consists of several sequences that yield different image contrasts. Each sequence is parameterized through multiple acquisition parameters that influence image contrast, signal-to-noise ratio, acquisition time, and/or resolution. Depending on the clinical indication, the radiologist requires different contrasts to make a diagnosis. Because MR sequence acquisition is time-consuming and acquired images may be corrupted by motion, a method to synthesize MR images with adjustable contrast properties is needed. Methods: We therefore trained an image-to-image generative adversarial network conditioned on the MR acquisition parameters repetition time and echo time. Our approach is motivated by style transfer networks, but in our case the “style” of an image is given explicitly, as it is determined by the MR acquisition parameters on which the network is conditioned. Results: This enables us to synthesize MR images with adjustable image contrast. We evaluated our approach on the fastMRI dataset, a large set of publicly available MR knee images, and show that our method outperforms a benchmark pix2pix approach in the translation of non-fat-saturated to fat-saturated MR images. Our approach yields a peak signal-to-noise ratio of 24.48 and a structural similarity of 0.66, significantly surpassing the pix2pix benchmark model. Conclusion: Our model is the first to enable fine-tuned contrast synthesis, which can be used to synthesize missing MR contrasts or as a data augmentation technique for AI training in MRI. It can also serve as a basis for other image-to-image translation tasks in medical imaging, e.g., to enhance intermodality translation (MRI → CT) or 7 T image synthesis from 3 T MR images. (A minimal, illustrative sketch of this acquisition-parameter conditioning follows the record below.) |
Database: | OpenAIRE |
External link: |
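
The Methods summary above describes conditioning an image-to-image GAN generator on the acquisition parameters repetition time (TR) and echo time (TE). Below is a minimal, hypothetical PyTorch sketch of one way such conditioning can be wired up: the two scalar parameters are projected to an extra feature map and concatenated to the input image before a small encoder-decoder. The module name `ParamConditionedGenerator`, the layer sizes, the 64×64 input resolution, and the concatenation-based conditioning are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal, illustrative sketch (not the authors' code): conditioning an
# image-to-image generator on MR acquisition parameters (repetition time TR,
# echo time TE). Layer sizes, input resolution, and the conditioning scheme
# are assumptions for demonstration only.
import torch
import torch.nn as nn


class ParamConditionedGenerator(nn.Module):
    """Toy encoder-decoder mapping a source-contrast slice plus target
    (TR, TE) to a synthesized target-contrast slice."""

    def __init__(self, channels: int = 1, base: int = 32, size: int = 64):
        super().__init__()
        self.size = size
        # Project the two scalar acquisition parameters to one extra feature
        # map that is concatenated to the input image channels.
        self.param_proj = nn.Linear(2, size * size)
        self.encoder = nn.Sequential(
            nn.Conv2d(channels + 1, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, channels, 4, stride=2, padding=1),
            nn.Tanh(),  # images assumed to be normalized to [-1, 1]
        )

    def forward(self, image: torch.Tensor, tr_te: torch.Tensor) -> torch.Tensor:
        # image: (N, C, size, size); tr_te: (N, 2) normalized TR/TE values.
        n = image.shape[0]
        cond = self.param_proj(tr_te).view(n, 1, self.size, self.size)
        x = torch.cat([image, cond], dim=1)
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    gen = ParamConditionedGenerator()
    src = torch.randn(4, 1, 64, 64)   # source-contrast slices
    params = torch.rand(4, 2)         # normalized target (TR, TE)
    print(gen(src, params).shape)     # torch.Size([4, 1, 64, 64])
```

Concatenating a projected parameter map is only one possible design; a style-transfer-like alternative, closer to the motivation stated in the abstract, would inject TR/TE through learned per-channel scale and shift parameters (adaptive normalization) instead.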