A Deep Unsupervised Learning Model for Artifact Correction of Pelvis Cone-Beam CT

Authors: Guoya Dong, Chenglong Zhang, Xiaokun Liang, Lei Deng, Yulin Zhu, Xuanyu Zhu, Xuanru Zhou, Liming Song, Xiang Zhao, Yaoqin Xie
Language: English
Year of publication: 2021
Subject:
Source: Frontiers in Oncology, Vol 11 (2021)
Document type: article
ISSN: 2234-943X
DOI: 10.3389/fonc.2021.686875
Description:
Purpose: In recent years, cone-beam computed tomography (CBCT) has been increasingly used in adaptive radiation therapy (ART). However, compared with planning computed tomography (PCT), CBCT images have much more noise and many more imaging artifacts. It is therefore necessary to improve the image quality and HU accuracy of CBCT. In this study, we developed an unsupervised deep learning network (CycleGAN) model to calibrate pelvic CBCT images and extend potential clinical applications in CBCT-guided ART.
Methods: To train the CycleGAN to generate synthetic PCT (sPCT), we used unpaired CBCT and PCT images from 49 patients as inputs. Deformed PCT (dPCT) images, obtained by deformably registering PCT to the CBCT, were used as the ground truth for evaluation. The trained network converts uncorrected CBCT images into sPCT images; the resulting sPCT images have the intensity characteristics of PCT while preserving the anatomical structure of the CBCT. To demonstrate the effectiveness of the proposed CycleGAN, nine additional independent patients were used for testing.
Results: We compared the sPCT images with the dPCT ground truth. On the testing data, the average mean absolute error (MAE) of the whole image decreased from 49.96 ± 7.21 HU to 14.6 ± 2.39 HU, and the average MAE of the fat and muscle ROIs decreased from 60.23 ± 7.3 HU to 16.94 ± 7.5 HU and from 53.16 ± 9.1 HU to 13.03 ± 2.63 HU, respectively.
Conclusion: We developed an unsupervised learning method to generate high-quality corrected CBCT images (sPCT). With further evaluation and clinical implementation, sPCT could replace CBCT in ART.
Database: Directory of Open Access Journals
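
For context, the sketch below (not taken from the article) shows how the whole-image and ROI mean absolute error reported above could be computed in HU, assuming the sPCT and dPCT volumes are available as NumPy arrays of identical shape; the function and variable names (mean_absolute_error_hu, fat_mask) are hypothetical.

```python
import numpy as np

def mean_absolute_error_hu(spct, dpct, mask=None):
    """MAE in HU between a synthetic PCT volume and the dPCT ground truth.

    If a boolean ROI mask is given (e.g. fat or muscle), the error is
    restricted to the voxels inside that ROI; otherwise the whole image
    is used.
    """
    spct = np.asarray(spct, dtype=np.float64)
    dpct = np.asarray(dpct, dtype=np.float64)
    diff = np.abs(spct - dpct)
    if mask is not None:
        diff = diff[np.asarray(mask, dtype=bool)]
    return float(diff.mean())

# Hypothetical usage with per-patient volumes:
# whole_mae = mean_absolute_error_hu(spct, dpct)
# fat_mae   = mean_absolute_error_hu(spct, dpct, mask=fat_mask)
# muscle_mae = mean_absolute_error_hu(spct, dpct, mask=muscle_mask)
```

Averaging these per-patient values over the nine test patients would give summary statistics of the same form as the mean ± standard deviation figures quoted in the Results.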