Latent space manipulation for high-resolution medical image synthesis via the StyleGAN
Author: | G. Heilemann, Tommy Löfstedt, Mikael Bylund, Dietmar Georg, Peter Kuess, Tufve Nyholm, L. Fetty |
---|---|
Language: | English |
Year of publication: | 2020 |
Subject: | Mean squared error, Computer science, Biophysics, Signal-to-noise ratio, Latent space, Position (vector), Image processing (computer-assisted), Humans, Radiology, Nuclear medicine and imaging, Computer vision and robotics (autonomous systems), Modality (human–computer interaction), Radiological and ultrasound technology, Deep learning, Magnetic resonance imaging, Pattern recognition, Image synthesis, StyleGAN, Sample (graphics), Feature (computer vision), Artificial intelligence, Tomography (X-ray computed), Algorithms |
Description: | Introduction This paper explores the potential of the StyleGAN model as a high-resolution image generator for synthetic medical images. The ability to generate sample patient images of different modalities can be helpful for training deep learning algorithms, e.g., as a data augmentation technique. Methods The StyleGAN model was trained on Computed Tomography (CT) and T2-weighted Magnetic Resonance (MR) images from 100 patients with pelvic malignancies. The resulting model was investigated with regard to three features: image modality, sex, and longitudinal slice position. Further, the style transfer feature of the StyleGAN was used to move images between the modalities. The root-mean-square error (RMSE) and the mean absolute error (MAE) were used to quantify errors for MR and CT, respectively. Results We demonstrate how these features can be transformed by manipulating the latent style vectors, and attempt to quantify how the errors change as we move through the latent style space. The best results were achieved by using the style transfer feature of the StyleGAN (58.7 HU MAE for MR to CT and 0.339 RMSE for CT to MR). Slices below and above an initial central slice can be predicted with an error below 75 HU MAE and 0.3 RMSE within 4 cm for CT and MR, respectively. Discussion The StyleGAN is a promising model to use for generating synthetic medical images for MR and CT modalities as well as for 3D volumes. |
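The abstract describes two building blocks that are easy to sketch independently of any trained network: moving through the latent style space (here approximated by plain linear interpolation between two latent vectors) and quantifying errors with MAE (in HU for CT) and RMSE (on normalised MR intensities). The sketch below is illustrative only; the function names, the 512-dimensional latent size, and the random stand-in arrays are assumptions, not the authors' code.

```python
import numpy as np

def lerp(w_a, w_b, t):
    """Linearly interpolate between two latent style vectors (t in [0, 1])."""
    return (1.0 - t) * w_a + t * w_b

def mae(pred, target):
    """Mean absolute error, e.g. in Hounsfield units for synthetic CT."""
    return float(np.mean(np.abs(pred - target)))

def rmse(pred, target):
    """Root-mean-square error, e.g. on normalised MR intensities."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# Random data standing in for generated and reference slices.
rng = np.random.default_rng(0)
w_a = rng.normal(size=512)          # latent vector of image A
w_b = rng.normal(size=512)          # latent vector of image B
w_mid = lerp(w_a, w_b, 0.5)         # point halfway along the latent path

ct_pred = rng.normal(0.0, 50.0, size=(256, 256))
ct_ref = rng.normal(0.0, 50.0, size=(256, 256))
print("MAE (HU):", mae(ct_pred, ct_ref))
print("RMSE:", rmse(ct_pred, ct_ref))
```

In practice each interpolated vector (such as `w_mid`) would be fed to the generator, and the metrics computed between the resulting synthetic slice and the reference slice at the corresponding position.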
Database: | OpenAIRE |
External link: |