A deep learning-based ADRPPA algorithm for the prediction of diabetic retinopathy progression.

Authors: Wang VY; Department of Ophthalmology, Keck School of Medicine, USC Roski Eye Institute, University of Southern California, Los Angeles, CA, USA., Lo MT; Department of Biomedical Sciences and Engineering, National Central University, Research Center Building 3, Room 404, 300 Zhongda Rd, Zhong-Li, Taoyuan, Taiwan., Chen TC; Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan.; Center of Frontier Medicine, National Taiwan University Hospital, Taipei, Taiwan., Huang CH; Department of Ophthalmology, Cathay General Hospital, Taipei, Taiwan., Huang A; Department of Biomedical Sciences and Engineering, National Central University, Research Center Building 3, Room 404, 300 Zhongda Rd, Zhong-Li, Taoyuan, Taiwan. adamhuan@gmail.com., Wang PC; Department of Medical Research, Cathay General Hospital, 280 Jen-Ai Rd. Sec.4 106, Taipei, Taiwan. pachunwang@cgh.org.tw.; Fu-Jen Catholic University School of Medicine, New Taipei City, Taiwan. pachunwang@cgh.org.tw.; Department of Medical Research, China Medical University Hospital, Taichung, Taiwan. pachunwang@cgh.org.tw.
Language: English
Source: Scientific reports [Sci Rep] 2024 Dec 30; Vol. 14 (1), pp. 31772. Date of Electronic Publication: 2024 Dec 30.
DOI: 10.1038/s41598-024-82884-9
Abstract: As an alternative to assessments performed by human experts, artificial intelligence (AI) is currently being used for screening fundus images and monitoring diabetic retinopathy (DR). Although AI models can provide quasi-clinician diagnoses, they rarely offer new insights to assist clinicians in predicting disease prognosis and treatment response. Using longitudinal retinal imaging data, we developed and validated a predictive model for DR progression: the AI-driven Diabetic Retinopathy Progression Prediction Algorithm (ADRPPA). In this retrospective study, we analyzed paired retinal fundus images of the same eye captured at ≥ 1-year intervals, using the EyePACS dataset. By analyzing 12,768 images from 6384 eyes (2 images/eye, taken 733 ± 353 days apart), each annotated with DR severity grades, we trained the neural network ResNeXt to automatically determine DR severity. EyePACS data corresponding to 5108 (80%), 639 (10%), and 637 (10%) eyes were used for model training, validation, and testing, respectively. We further used an independent e-ophtha dataset comprising 148 images annotated with microaneurysms, 118 (75%) and 30 (25%) of which were used for training and validation, respectively. This dataset was used to train a Mask Region-based Convolutional Neural Network (Mask-RCNN) for quantifying microaneurysms. The DR and microaneurysm scores from the first nonreferable DR (NRDR) image of each eye were used to predict progression to referable DR (RDR) in the second image. The area under the receiver operating characteristic curve values indicating our model's performance in diagnosing RDR were 0.963, 0.970, 0.968, and 0.971 for the trained ResNeXt models with input image resolutions of 256 × 256, 512 × 512, 768 × 768, and 1024 × 1024 pixels, respectively.
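The RDR-diagnosis performance above is reported as an area under the ROC curve (AUC). As a minimal illustration of how such a figure is computed from a classifier's continuous outputs, the sketch below uses the rank-sum (Mann-Whitney U) formulation of AUC on toy data; the labels and scores are invented for illustration and are not from the EyePACS evaluation.

```python
# Illustrative AUC computation (rank-sum formulation), not the paper's code.
def roc_auc(labels, scores):
    """AUC = probability that a randomly chosen positive (RDR) case
    receives a higher score than a randomly chosen negative (NRDR) case,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one case from each class")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: 1 = referable DR, 0 = nonreferable DR
y_true = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.8, 0.7, 0.2, 0.9]
print(roc_auc(y_true, y_score))  # perfect separation on this toy set: 1.0
```

In practice such metrics are usually computed with a library routine (e.g. scikit-learn's `roc_auc_score`); the explicit pairwise form above makes the definition visible.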
In the validation of the Mask-RCNN model trained on the e-ophtha dataset resized to 1600 pixels in height, the recall, precision, and F1-score values for detecting individual microaneurysms were 0.786, 0.615, and 0.690, respectively. The best model combination for predicting NRDR-to-RDR progression comprised the 768-pixel ResNeXt and 1600-pixel Mask-RCNN models; this combination achieved recall, precision, and F1-scores of 0.338 (95% confidence interval [CI]: 0.228-0.451), 0.561 (95% CI: 0.405-0.714), and 0.422 (95% CI: 0.299-0.532), respectively. Thus, deep learning models can be trained on longitudinal retinal imaging data to predict NRDR-to-RDR progression. Furthermore, DR and microaneurysm scores generated from low- and high-resolution fundus images, respectively, can help identify NRDR patients at high risk of progression to RDR, facilitating timely treatment.
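The evaluation above scores a binary progression prediction built from two per-eye signals: a first-visit DR-severity score (ResNeXt) and a microaneurysm count (Mask-RCNN). The sketch below shows one plausible shape for such a pipeline and for the recall/precision/F1 metrics reported; the decision rule, thresholds, and data are illustrative assumptions, not the paper's actual method or results.

```python
# Hypothetical combination rule and metric computation, for illustration only.
def predict_progression(dr_score, ma_count, dr_thresh=0.5, ma_thresh=3):
    """Flag an NRDR eye as likely to progress to RDR if either the
    DR-severity score or the microaneurysm count is high.
    Thresholds here are assumptions, not values from the study."""
    return int(dr_score >= dr_thresh or ma_count >= ma_thresh)

def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy cohort: (dr_score, ma_count, progressed_to_RDR)
eyes = [(0.2, 1, 0), (0.7, 2, 1), (0.3, 5, 1), (0.1, 0, 0), (0.6, 4, 1)]
y_true = [label for _, _, label in eyes]
y_pred = [predict_progression(d, m) for d, m, _ in eyes]
print(precision_recall_f1(y_true, y_pred))
```

The study presumably learns this combination rather than hand-setting thresholds; the point here is only how two per-eye scores reduce to a binary prediction that precision, recall, and F1 can then summarize.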
Competing interests: The authors declare no competing interests.
(© 2024. The Author(s).)
Database: MEDLINE