Popis: |
This thesis presents a novel approach to data augmentation for multi-modality medical image datasets and introduces a way of improving this method through a hand-crafted intermediate modality.

In research topic I, we introduce Split-U net, a novel multi-branch architecture based on the state-of-the-art feature-learning framework of generative adversarial networks (GANs). This is particularly significant because most current data augmentation methods focus only on single-modality transfer and are not suited to augmenting multi-modality datasets. Our architecture is designed to jointly learn complementary multi-modality PET-CT image features. We evaluated our method on a PET-CT soft-tissue sarcoma dataset. Our results show that our multi-modality synthetic images achieve higher image quality and that they augment the training dataset in a way that improves PET-CT tumour segmentation performance without requiring additional annotated data.

In research topic II, we improved the above method by introducing an auxiliary intermediate modality called 'PET-CT fusion'. We provide insight into achieving better qualitative control when designing a framework for multi-modality learning. By synthesizing this hand-crafted modality, we present a way to minimize the distance between the two modalities. The 'PET-CT fusion' branch makes our synthetic images more stable: compared with Split-U net, it achieves higher synthetic image quality and greater effectiveness in data augmentation.

Based on these projects, our experimental results demonstrate that a multi-branch structure outperforms a single-branch one in producing synthetic data of higher image quality and greater effectiveness in data augmentation. Moreover, using an intermediate modality makes cross-modality learning more stable and yields better synthetic images. These findings help us better understand how multi-modality image synthesis works and how to improve it. |