Deep learning with uncertainty estimation for automatic tumor segmentation in PET/CT of head and neck cancers: impact of model complexity, image processing and augmentation.

Author: Huynh BN; Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway., Groendahl AR; Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway.; Section of Oncology, Vestre Viken Hospital Trust, Drammen, Norway., Tomic O; Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway., Liland KH; Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway., Knudtsen IS; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway., Hoebers F; Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Reproduction, Maastricht, Netherlands., van Elmpt W; Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Reproduction, Maastricht, Netherlands., Dale E; Department of Oncology, Oslo University Hospital, Oslo, Norway., Malinen E; Department of Medical Physics, Oslo University Hospital, Oslo, Norway.; Department of Physics, University of Oslo, Oslo, Norway., Futsaether CM; Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway.
Language: English
Source: Biomedical Physics & Engineering Express [Biomed Phys Eng Express] 2024 Aug 30; Vol. 10 (5). Date of Electronic Publication: 2024 Aug 30.
DOI: 10.1088/2057-1976/ad6dcd
Abstract: Objective. Target volumes for radiotherapy are usually contoured manually, which can be time-consuming and prone to inter- and intra-observer variability. Automatic contouring by convolutional neural networks (CNNs) can be fast and consistent but may produce unrealistic contours or miss relevant structures. We evaluate approaches for increasing the quality and assessing the uncertainty of CNN-generated contours of head and neck cancers with PET/CT as input. Approach. Two patient cohorts with head and neck squamous cell carcinoma and baseline 18F-fluorodeoxyglucose positron emission tomography and computed tomography images (FDG-PET/CT) were collected retrospectively from two centers. The union of manual contours of the gross primary tumor and involved nodes was used to train CNN models for generating automatic contours. The impact of image preprocessing, image augmentation, transfer learning and CNN complexity, architecture, and dimension (2D or 3D) on model performance and generalizability across centers was evaluated. A Monte Carlo dropout technique was used to quantify and visualize the uncertainty of the automatic contours. Main results. CNN models provided contours with good overlap with the manually contoured ground truth (median Dice Similarity Coefficient: 0.75-0.77), consistent with reported inter-observer variations and previous auto-contouring studies. Image augmentation and model dimension, rather than model complexity, architecture, or advanced image preprocessing, had the largest impact on model performance and cross-center generalizability. Transfer learning on a limited number of patients from a separate center increased model generalizability without decreasing model performance on the original training cohort. High model uncertainty was associated with false positive and false negative voxels as well as low Dice coefficients. Significance. High-quality automatic contours can be obtained using deep learning architectures that are not overly complex. Uncertainty estimation of the predicted contours shows potential for highlighting regions of the contour requiring manual revision or flagging segmentations requiring manual inspection and intervention.
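The abstract describes two quantitative components: Monte Carlo dropout for voxel-wise uncertainty and the Dice Similarity Coefficient for overlap with the manual ground truth. The sketch below is a minimal illustration of both, not the authors' code; it assumes a PyTorch segmentation CNN with Dropout layers, and all names (model, image, ground_truth_mask) are hypothetical.

```python
# Minimal sketch (assumed PyTorch workflow, not from the paper): Monte Carlo
# dropout inference for a segmentation CNN, plus a Dice coefficient check.
import torch


def enable_mc_dropout(model: torch.nn.Module) -> None:
    """Keep the model in eval mode but re-activate its Dropout layers,
    so repeated forward passes give stochastic predictions."""
    model.eval()
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()


@torch.no_grad()
def mc_dropout_predict(model, image, n_samples: int = 20):
    """Run n_samples stochastic forward passes; return the mean probability
    map and the per-voxel standard deviation (the uncertainty map)."""
    enable_mc_dropout(model)
    probs = torch.stack([torch.sigmoid(model(image)) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)


def dice_coefficient(pred_mask, true_mask, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred_mask.float()
    true = true_mask.float()
    intersection = (pred * true).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + true.sum() + eps))


# Hypothetical usage: `image` is a PET/CT tensor, e.g. shape (1, 2, D, H, W)
# with PET and CT as the two input channels.
# mean_prob, uncertainty = mc_dropout_predict(model, image)
# auto_contour = mean_prob > 0.5
# dsc = dice_coefficient(auto_contour, ground_truth_mask)
```

High values in the uncertainty map would flag voxels where the stochastic predictions disagree, which, per the abstract, tend to coincide with false positives, false negatives and low Dice scores.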
(© 2024 IOP Publishing Ltd. All rights, including for text and data mining, AI training, and similar technologies, are reserved.)
Database: MEDLINE