Development of prediction models for one-year brain tumour survival using machine learning: a comparison of accuracy and interpretability.
Author: Charlton CE; Artificial Intelligence and its Applications Institute, School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, UK. Electronic address: colleen.charlton@camh.ca., Poon MTC; Cancer Research UK Brain Tumour Centre of Excellence, CRUK Edinburgh Centre, University of Edinburgh, Edinburgh, UK; Department of Clinical Neuroscience, Royal Infirmary of Edinburgh, 51 Little France Crescent EH16 4SA, UK; Translational Neurosurgery, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK; Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, UK., Brennan PM; Cancer Research UK Brain Tumour Centre of Excellence, CRUK Edinburgh Centre, University of Edinburgh, Edinburgh, UK; Department of Clinical Neuroscience, Royal Infirmary of Edinburgh, 51 Little France Crescent EH16 4SA, UK; Translational Neurosurgery, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK., Fleuriot JD; Artificial Intelligence and its Applications Institute, School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, UK.
Language: English
Source: Computer methods and programs in biomedicine [Comput Methods Programs Biomed] 2023 May; Vol. 233, p. 107482. Date of Electronic Publication: 2023 Mar 13.
DOI: 10.1016/j.cmpb.2023.107482
Abstract:
Background and Objective: Prediction of survival in patients diagnosed with a brain tumour is challenging because of heterogeneous tumour behaviours and treatment response. Advances in machine learning have led to the development of clinical prognostic models, but because these models often lack interpretability, their integration into clinical practice is almost non-existent. In this retrospective study, we compare five classification models with varying degrees of interpretability for the prediction of brain tumour survival greater than one year following diagnosis.
Methods: 1028 patients aged ≥16 years with a brain tumour diagnosis between April 2012 and April 2020 were included in our study. Three intrinsically interpretable 'glass box' classifiers (Bayesian Rule Lists [BRL], Explainable Boosting Machine [EBM], and Logistic Regression [LR]) and two 'black box' classifiers (Random Forest [RF] and Support Vector Machine [SVM]) were trained on electronic patient records for the prediction of one-year survival. All models were evaluated using balanced accuracy (BAC), F1-score, sensitivity, specificity, and receiver operating characteristic curves. Black-box model interpretability and misclassified predictions were quantified using SHapley Additive exPlanations (SHAP) values, and model feature importance was evaluated by clinical experts (see the sketches after this record).
Results: The RF model achieved the highest BAC of 78.9%, closely followed by SVM (77.7%), LR (77.5%), and EBM (77.1%). Across all models, age, diagnosis (tumour type), functional features, and first treatment were the top contributors to the prediction of one-year survival. We used EBM and SHAP to explain model misclassifications and investigated the role of feature interactions in prognosis.
Conclusion: Interpretable models are a natural choice for the domain of predictive medicine. Intrinsically interpretable models, such as EBMs, may provide an advantage over traditional clinical assessment of brain tumour prognosis by weighting potential risk factors and their interactions that may be unknown to clinicians. Agreement between model predictions and clinical knowledge is essential for establishing trust in a model's decision-making process, as well as trust that the model will make accurate predictions when applied to new data.
Competing Interests: Declaration of Competing Interest. The authors declare no conflict of interest.
(Copyright © 2023. Published by Elsevier B.V.)
Database: MEDLINE
External link:
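The Methods section above describes training three glass-box and two black-box classifiers on patient records and comparing them on balanced accuracy and related metrics. Below is a minimal sketch of such a pipeline, assuming scikit-learn and the interpret library; the synthetic data, feature count, split ratio, and hyperparameters are illustrative assumptions rather than the authors' configuration, and the BRL model is omitted for brevity.

```python
# Sketch of the model-comparison pipeline from the Methods section.
# Synthetic data stands in for the electronic patient records; nothing
# here reproduces the authors' actual features or settings.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder tabular features (age, diagnosis, functional status, first
# treatment, ...) and a binary one-year-survival label.
X, y = make_classification(n_samples=1028, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

models = {
    "EBM (glass box)": ExplainableBoostingClassifier(random_state=0),
    "LR (glass box)": make_pipeline(StandardScaler(),
                                    LogisticRegression(max_iter=1000)),
    "RF (black box)": RandomForestClassifier(random_state=0),
    "SVM (black box)": make_pipeline(StandardScaler(),
                                     SVC(probability=True, random_state=0)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    proba = model.predict_proba(X_test)[:, 1]
    print(f"{name}: BAC={balanced_accuracy_score(y_test, pred):.3f} "
          f"F1={f1_score(y_test, pred):.3f} "
          f"AUC={roc_auc_score(y_test, proba):.3f}")
```

Scaling is applied inside pipelines for LR and SVM only, since tree-based models such as RF and EBM are insensitive to monotone feature scaling.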
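The abstract also reports using SHAP values to quantify black-box interpretability and to probe misclassified predictions. The following sketch, which assumes the fitted `models`, `X_test`, and `y_test` objects from the previous example and the shap library, shows one common way to derive global feature importance and a local explanation for a misclassified case; the feature indices printed are placeholders for real clinical variables.

```python
# Sketch of a SHAP analysis of the black-box Random Forest.
import numpy as np
import shap

rf = models["RF (black box)"]  # fitted RandomForestClassifier
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_test)

# Binary sklearn forests yield per-class attributions; keep the positive
# (survived >1 year) class. The layout differs across shap versions.
pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Global importance: mean absolute SHAP value per feature.
importance = np.abs(pos).mean(axis=0)
for idx in np.argsort(importance)[::-1][:5]:
    print(f"feature_{idx}: mean |SHAP| = {importance[idx]:.4f}")

# Local explanation for one misclassified test case: which features
# pushed the prediction hardest, in either direction.
misclassified = np.flatnonzero(rf.predict(X_test) != y_test)
if misclassified.size:
    i = misclassified[0]
    top = np.argsort(np.abs(pos[i]))[::-1][:3]
    print(f"case {i}: top contributing features -> {top.tolist()}")
```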