A multi-scene deep learning model for automated segmentation of acute vertebral compression fractures from radiographs: a multicenter cohort study

Authors: Hao Zhang, Genji Yuan, Ziyue Zhang, Xiang Guo, Ruixiang Xu, Tongshuai Xu, Xin Zhong, Meng Kong, Kai Zhu, Xuexiao Ma
Language: English
Publication year: 2024
Source: Insights into Imaging, Vol 15, Iss 1, Pp 1-11 (2024)
Document type: article
ISSN: 1869-4101
DOI: 10.1186/s13244-024-01861-y
Description:
Objective: To develop a multi-scene model that can automatically segment acute vertebral compression fractures (VCFs) from spine radiographs.
Methods: In this multicenter study, we collected radiographs from five hospitals (Hospitals A–E) between November 2016 and October 2019. The study included participants with acute VCFs as well as healthy controls. For the development of the Positioning and Focus Network (PFNet), we used a training dataset of 1071 participants from Hospitals A and B. The validation dataset included 458 participants from Hospitals A and B, whereas external test datasets 1–3 included 301 participants from Hospital C, 223 from Hospital D, and 261 from Hospital E, respectively. We evaluated the segmentation performance of the PFNet model and compared it with previously described approaches. Additionally, we used qualitative comparison and gradient-weighted class activation mapping (Grad-CAM) to explain the feature learning and segmentation results of the PFNet model.
Results: The PFNet model achieved accuracies of 99.93%, 98.53%, 99.21%, and 100% for the segmentation of acute VCFs in the validation dataset and external test datasets 1–3, respectively. Receiver operating characteristic curves comparing the four models across the validation and external test datasets consistently showed that the PFNet model outperformed the other approaches, achieving the highest values for all measures. The qualitative comparison and Grad-CAM provided an intuitive view of the interpretability and effectiveness of our PFNet model.
Conclusion: In this study, we successfully developed a multi-scene model based on spine radiographs for precise preoperative and intraoperative segmentation of acute VCFs.
Critical relevance statement: Our PFNet model demonstrated high accuracy in multi-scene segmentation in clinical settings, making it a significant advancement in this field.
Key Points:
- This study developed the first multi-scene deep learning model capable of segmenting acute VCFs from spine radiographs.
- The model's architecture consists of two crucial modules: an attention-guided module and a supervised decoding module.
- The exceptional generalization and consistently superior performance of the model were validated using multicenter external test datasets.
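The abstract names the model's two modules, an attention-guided module and a supervised decoding module, without implementation detail. As a purely illustrative aid, the PyTorch sketch below shows one common way such a pairing is built: attention gates that reweight encoder skip connections, plus an auxiliary decoder head trained against the same mask (deep supervision). The class names (TwoModuleSegNet, AttentionGate), channel widths, and depth are assumptions for illustration, not the authors' PFNet.

```python
# Illustrative sketch only (assumed architecture, not the authors' PFNet):
# a small encoder-decoder with (1) attention gates that reweight skip
# connections and (2) a supervised decoder with an auxiliary mask head.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class AttentionGate(nn.Module):
    """Reweights encoder skip features using the coarser decoder signal."""

    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, 1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, skip, gate):
        gate = F.interpolate(gate, size=skip.shape[2:], mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn  # attention map suppresses regions unrelated to the target vertebra


class TwoModuleSegNet(nn.Module):
    """Hypothetical two-module segmentation net: attention-guided skips + deep supervision."""

    def __init__(self, in_ch=1, n_classes=1):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = conv_block(in_ch, 32), conv_block(32, 64), conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.att2 = AttentionGate(skip_ch=64, gate_ch=128, inter_ch=32)
        self.att1 = AttentionGate(skip_ch=32, gate_ch=64, inter_ch=16)
        self.dec2 = conv_block(128 + 64, 64)
        self.dec1 = conv_block(64 + 32, 32)
        self.aux_head = nn.Conv2d(64, n_classes, 1)   # auxiliary (supervised decoding) output
        self.out_head = nn.Conv2d(32, n_classes, 1)   # main mask output

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        up3 = F.interpolate(e3, size=e2.shape[2:], mode="bilinear", align_corners=False)
        d2 = self.dec2(torch.cat([up3, self.att2(e2, e3)], dim=1))
        up2 = F.interpolate(d2, size=e1.shape[2:], mode="bilinear", align_corners=False)
        d1 = self.dec1(torch.cat([up2, self.att1(e1, d2)], dim=1))
        aux = F.interpolate(self.aux_head(d2), size=x.shape[2:], mode="bilinear", align_corners=False)
        return self.out_head(d1), aux  # both heads are trained against the ground-truth mask


if __name__ == "__main__":
    model = TwoModuleSegNet()
    main_logits, aux_logits = model(torch.randn(1, 1, 256, 256))
    print(main_logits.shape, aux_logits.shape)  # torch.Size([1, 1, 256, 256]) twice
```

In designs like this, the attention gate damps skip-connection features outside the region indicated by the coarser decoder signal, while the auxiliary head adds a second loss term at lower resolution; both are widely used in medical image segmentation and are offered here only as a plausible reading of the two named modules.

The abstract also reports Grad-CAM visualizations to explain the model's feature learning. The sketch below computes a Grad-CAM heatmap with plain PyTorch autograd on the hypothetical network above; the choice of score (mean sigmoid of the mask logits) and of model.enc3 as the target layer are assumptions, not the paper's setup.

```python
# Illustrative Grad-CAM sketch (assumed, not the authors' implementation),
# applied to the hypothetical TwoModuleSegNet defined above.
import torch
import torch.nn.functional as F


def grad_cam(model, image, target_layer):
    feats = {}
    handle = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    model.eval()
    logits, _ = model(image)                       # main segmentation logits
    handle.remove()
    score = logits.sigmoid().mean()                # scalar summary of the fracture prediction
    grads = torch.autograd.grad(score, feats["a"])[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)             # per-channel importance
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).detach()     # heatmap normalised to [0, 1]


if __name__ == "__main__":
    model = TwoModuleSegNet()                      # from the sketch above
    heatmap = grad_cam(model, torch.randn(1, 1, 256, 256), target_layer=model.enc3)
    print(heatmap.shape)                           # torch.Size([1, 1, 256, 256])
```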
Database: Directory of Open Access Journals