A comparison of Monte Carlo dropout and bootstrap aggregation on the performance and uncertainty estimation in radiation therapy dose prediction with deep learning neural networks

Authors: Rafe McBeth, Steve B. Jiang, Anjali Balagopal, Gyanendra Bohara, Mu-Han Lin, Dan Nguyen, Azar Sadeghnejad Barkousaraie
Year of publication: 2021
Subject: Monte Carlo method; Bootstrap aggregating; Dropout (neural networks); Deep learning; Machine learning; Artificial neural network; Artificial intelligence; Uncertainty; Metaheuristic; Stability (learning theory); Radiation therapy; Radiotherapy Planning, Computer-Assisted; Radiotherapy Dosage; Radiation Dosage; Humans; Radiology; Nuclear Medicine and Imaging; Radiological and Ultrasound Technology; Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Medical Physics (physics.med-ph)
Source: Phys Med Biol
ISSN: 1361-6560
Description: Recently, artificial intelligence technologies and algorithms have become a major focus for advancements in treatment planning for radiation therapy. As these are starting to be incorporated into the clinical workflow, a major concern from clinicians is not whether the model is accurate, but whether the model can express to a human operator when it does not know if its answer is correct. We propose to use Monte Carlo dropout (MCDO) and the bootstrap aggregation (bagging) technique on deep learning (DL) models to produce uncertainty estimations for radiation therapy dose prediction. We show that both models are capable of generating a reasonable uncertainty map and, with our proposed scaling technique, of creating interpretable uncertainties and bounds on the prediction and any relevant metrics. Performance-wise, bagging provides a statistically significant reduction in loss value and in errors for most of the metrics investigated in this study. The addition of bagging further reduced errors by another 0.34% for D_mean and 0.19% for D_max, on average, when compared to the baseline model. Overall, the bagging framework provided a significantly lower mean absolute error (MAE) of 2.62, as opposed to the baseline model's MAE of 2.87. The usefulness of bagging, from solely a performance standpoint, depends highly on the problem and the acceptable predictive error, and its high upfront computational cost during training should be factored into deciding whether it is advantageous to use. In terms of deployment with uncertainty estimations turned on, both methods offer the same inference time of about 12 s. As an ensemble-based metaheuristic, bagging can be used with existing machine learning architectures to improve stability and performance, and MCDO can be applied to any DL model that has dropout as part of its architecture.
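The two uncertainty mechanisms named in the abstract can be sketched on toy models. This is a minimal numpy illustration, not the authors' dose-prediction network: MC dropout keeps the dropout mask active at inference and reads the spread of repeated stochastic forward passes as uncertainty, while bagging reads the disagreement of models fit on bootstrap resamples. All function names and shapes here are illustrative assumptions.

```python
import numpy as np

def mc_dropout_predict(x, weights, p=0.1, T=30, rng=None):
    """Monte Carlo dropout on a toy linear layer: dropout stays ON at
    inference, and the std. dev. over T stochastic passes is the
    per-output uncertainty estimate."""
    rng = np.random.default_rng(rng)
    preds = []
    for _ in range(T):
        mask = rng.random(x.shape) > p               # Bernoulli dropout mask
        preds.append((x * mask / (1.0 - p)) @ weights)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

def bagging_predict(X, y, x_new, B=20, rng=None):
    """Bagging on a toy least-squares model: fit B models on bootstrap
    resamples of (X, y); the spread of their predictions on x_new is
    the uncertainty estimate."""
    rng = np.random.default_rng(rng)
    n = len(X)
    preds = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)             # resample with replacement
        w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        preds.append(x_new @ w)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# MC dropout on a fixed input/weight pair
mean, std = mc_dropout_predict(np.ones(8), np.ones((8, 2)), T=100, rng=0)

# Bagging on noisy synthetic regression data
gen = np.random.default_rng(0)
X = gen.normal(size=(50, 3))
true_w = np.array([1.0, 2.0, 3.0])
y = X @ true_w + gen.normal(scale=0.1, size=50)
b_mean, b_std = bagging_predict(X, y, np.ones(3), B=50, rng=1)
```

Note the trade-off the abstract measures: MC dropout reuses one trained model and only pays the cost of extra forward passes, whereas bagging pays an upfront cost of training B models but can also improve accuracy by averaging.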
Database: OpenAIRE