Color Fundus Photography and Deep Learning Applications in Alzheimer Disease.
| Author | Dumitrascu OM; Department of Neurology, Mayo Clinic, Scottsdale, AZ; Department of Ophthalmology, Mayo Clinic, Scottsdale, AZ., Li X; School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ., Zhu W; School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ., Woodruff BK; Department of Neurology, Mayo Clinic, Scottsdale, AZ., Nikolova S; Department of Neurology, Mayo Clinic, Scottsdale, AZ., Sobczak J; Department of Neurology, Mayo Clinic, Scottsdale, AZ., Youssef A; Department of Neurology, Mayo Clinic, Scottsdale, AZ., Saxena S; Department of Neurology, Mayo Clinic, Scottsdale, AZ., Andreev J; Department of Neurology, Mayo Clinic, Scottsdale, AZ., Caselli RJ; Department of Neurology, Mayo Clinic, Scottsdale, AZ., Chen JJ; Department of Ophthalmology, Mayo Clinic, Rochester, MN; Department of Neurology, Mayo Clinic, Rochester, MN., Wang Y; School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ. |
|---|---|
| Language | English |
| Source | Mayo Clinic proceedings. Digital health [Mayo Clin Proc Digit Health] 2024 Dec; Vol. 2 (4), pp. 548-558. Date of Electronic Publication: 2024 Aug 26. |
| DOI | 10.1016/j.mcpdig.2024.08.005 |
| Abstract | Objective: To report the development and performance of 2 distinct deep learning models trained exclusively on retinal color fundus photographs to classify Alzheimer disease (AD). Patients and Methods: Two independent datasets (UK Biobank and our tertiary academic institution) of good-quality retinal photographs derived from patients with AD and controls were used to build 2 deep learning models, between April 1, 2021, and January 30, 2024. ADVAS is a U-Net-based architecture that uses retinal vessel segmentation. ADRET is a bidirectional encoder representations from transformers (BERT)-style self-supervised convolutional neural network pretrained on a large dataset of retinal color photographs from UK Biobank. The models' performance in distinguishing AD from non-AD was assessed using mean accuracy, sensitivity, specificity, and receiver operating characteristic curves. The generated attention heatmaps were analyzed for distinctive features. Results: The self-supervised ADRET model had superior accuracy when compared with ADVAS, in both the UK Biobank (98.27% vs 77.20%; P <.001) and our institutional testing datasets (98.90% vs 94.17%; P =.04). No major differences were noted between the original and binary vessel segmentations or between both-eyes and single-eye models. Attention heatmaps obtained from patients with AD highlighted regions surrounding small vascular branches as areas of highest relevance to the model's decision making. Conclusion: A BERT-style self-supervised convolutional neural network pretrained on a large dataset of retinal color photographs alone can screen for symptomatic AD with high accuracy, outperforming U-Net-based models. To be translated into clinical practice, this methodology requires further validation in larger and more diverse populations, along with integrated techniques to harmonize fundus photographs and attenuate imaging-associated noise. |
| Database | MEDLINE |
| External link | |
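
The record above names two model families and an evaluation protocol; the sketches below illustrate the general techniques involved, not the authors' actual code. ADVAS is described only as a U-Net-based architecture operating on retinal vessel segmentation. A minimal sketch of the U-Net idea, one encoder level, a bottleneck, and a skip connection feeding the decoder, follows; the depth, channel widths, and output handling here are assumptions, not details from the paper:

```python
# Minimal U-Net-style encoder-decoder with a skip connection, the kind
# of architecture the record says ADVAS builds on for retinal vessel
# segmentation; depth, widths, and output handling are assumptions.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    """One-level U-Net: downsample, bottleneck, upsample, skip concat."""
    def __init__(self):
        super().__init__()
        self.enc = block(3, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)        # 16 skip channels + 16 upsampled
        self.out = nn.Conv2d(16, 1, 1)  # per-pixel vessel logit

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.out(self.dec(torch.cat([e, u], dim=1)))

# A vessel probability map for one 224x224 fundus photograph.
seg = torch.sigmoid(TinyUNet()(torch.rand(1, 3, 224, 224)))
print(seg.shape)  # (1, 1, 224, 224)
```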
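ADRET is described as a BERT-style self-supervised CNN pretrained on unlabeled UK Biobank fundus photographs. The standard image analogue of BERT's masked-token objective is masked image modeling: hide random patches and train the network to reconstruct them, scoring the loss only on hidden pixels. A minimal sketch of that idea follows; the toy autoencoder, patch size, and masking ratio are all assumptions, since the record gives no architectural details:

```python
# Minimal masked-image-modeling pretraining loop (the idea behind
# BERT-style self-supervised pretraining on unlabeled fundus photos).
# ADRET's real architecture, patch size, and masking ratio are not
# specified in the record above; these values are placeholders.
import torch
import torch.nn as nn

PATCH = 16        # assumed patch size
MASK_RATIO = 0.5  # assumed fraction of patches to hide

class TinyConvAutoencoder(nn.Module):
    """Stand-in CNN encoder/decoder; the real model is far larger."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decode(self.encode(x))

def mask_patches(img: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Zero out a random subset of PATCH x PATCH squares; return the
    masked image and a boolean mask marking the hidden pixels."""
    b, _, h, w = img.shape
    keep = torch.rand(b, 1, h // PATCH, w // PATCH) >= MASK_RATIO
    mask = keep.repeat_interleave(PATCH, 2).repeat_interleave(PATCH, 3)
    return img * mask, ~mask

model = TinyConvAutoencoder()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One pretraining step on a fake batch of 224x224 fundus photographs;
# the loss is computed only on the masked (hidden) pixels, as in BERT.
batch = torch.rand(4, 3, 224, 224)
masked, hidden = mask_patches(batch)
recon = model(masked)
loss = ((recon - batch) ** 2 * hidden).sum() / hidden.sum()
opt.zero_grad()
loss.backward()
opt.step()
print(f"masked-reconstruction loss: {loss.item():.4f}")
```

After pretraining, the encoder would be fine-tuned on the labeled AD/control photographs; that supervised step is omitted here.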
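The abstract compares the models by mean accuracy, sensitivity, specificity, and receiver operating characteristic curves. A minimal sketch of how such a binary AD/non-AD evaluation is typically computed with scikit-learn follows; the function name, threshold, and toy data are illustrative, not taken from the study:

```python
# Illustrative evaluation of a binary AD / non-AD classifier using the
# metrics named in the abstract. The 0.5 threshold and toy data are
# assumptions for demonstration only.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true: np.ndarray, y_score: np.ndarray, threshold: float = 0.5):
    """Return accuracy, sensitivity, specificity, and AUC for
    ground-truth labels (1 = AD, 0 = control) and model scores."""
    y_pred = (y_score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    auc = roc_auc_score(y_true, y_score)  # area under the ROC curve
    return accuracy, sensitivity, specificity, auc

# Toy usage with synthetic scores; a real study uses held-out test data.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
scores = np.clip(labels * 0.6 + rng.normal(0.2, 0.25, size=200), 0, 1)
print(evaluate(labels, scores))
```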
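Finally, the attention heatmaps that highlighted small vascular branches are a model-interpretation output; the record does not say which method the authors used. One common way to produce such maps for a CNN is Grad-CAM, sketched here under that assumption with a stand-in classifier:

```python
# Grad-CAM-style heatmap for a CNN classifier, one plausible way to
# produce attention maps like those described in the abstract. The
# paper does not name its method; this model and layer are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    """Stand-in fundus classifier: conv features + global pooling."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, 2)  # AD vs control logits

    def forward(self, x):
        f = self.features(x)                      # (B, 32, H/4, W/4)
        return self.head(f.mean(dim=(2, 3))), f  # logits + feature maps

model = TinyClassifier().eval()
img = torch.rand(1, 3, 224, 224)

logits, fmaps = model(img)
fmaps.retain_grad()      # keep gradients on the intermediate feature maps
logits[0, 1].backward()  # gradient of the "AD" logit

# Grad-CAM: weight each feature map by its average gradient, then ReLU.
weights = fmaps.grad.mean(dim=(2, 3), keepdim=True)  # (1, 32, 1, 1)
cam = F.relu((weights * fmaps).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=img.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) heatmap over the fundus photograph
```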