Comparing humans and deep learning performance for grading AMD: A study in using universal deep features and transfer learning for automated AMD analysis
Author: Neil M. Bressler, David E. Freund, Philippe Burlina, Neil Joshi, Katia D. Pacheco
Year of publication: 2016
Subject: 0301 basic medicine; genetic structures; Computer science; Health Informatics; Fundus (eye); Convolutional neural network; Sensitivity and Specificity; Severity of Illness Index; Article; Pattern Recognition, Automated; Machine Learning; 03 medical and health sciences; Macular Degeneration; 0302 clinical medicine; Severity of illness; Image Interpretation, Computer-Assisted; medicine; Humans; Fluorescein Angiography; Grading (tumors); Observer Variation; business.industry; Deep learning; Reproducibility of Results; Macular degeneration; medicine.disease; eye diseases; Retinal image; Computer Science Applications; 030104 developmental biology; Early Diagnosis; 030221 ophthalmology & optometry; Optometry; sense organs; Artificial intelligence; business; Transfer of learning; Algorithms
Source: Computers in Biology and Medicine, vol. 82
ISSN: 1879-0534
Description: Background: When left untreated, age-related macular degeneration (AMD) is the leading cause of vision loss in people over fifty in the US. It is currently estimated that about eight million US individuals have the intermediate stage of AMD, which is often asymptomatic with regard to visual deficit. These individuals are at high risk of progressing to the advanced stage, where the often treatable choroidal neovascular form of AMD can occur. Careful monitoring to detect the onset of the neovascular form, prompt treatment, and dietary supplementation can reduce the risk of vision loss from AMD; therefore, preferred practice patterns recommend identifying individuals with the intermediate stage in a timely manner. Methods: Past automated retinal image analysis (ARIA) methods applied to fundus imagery have relied on engineered, hand-designed visual features. We instead detail a novel application of deep learning to ARIA and AMD analysis, using transfer learning and universal features derived from deep convolutional neural networks (DCNN). We address clinically relevant 4-class, 3-class, and 2-class AMD severity classification problems. Results: Using 5664 color fundus images from the NIH AREDS dataset and DCNN universal features, we obtain accuracies for the (4-, 3-, 2-) class classification problems of (79.4%, 81.5%, 93.4%) for machine grading versus (75.8%, 85.0%, 95.2%) for physician grading. Discussion: This study demonstrates the efficacy of machine grading based on deep universal features and transfer learning when applied to ARIA. It is a promising step toward a pre-screener that identifies individuals with intermediate AMD, and toward a tool that can help identify such individuals for clinical studies aimed at developing improved therapies. It also demonstrates comparable performance between computer and physician grading.
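Note: To make the transfer-learning step described in the abstract concrete, the sketch below extracts fixed "universal" features from a pretrained ImageNet DCNN and trains a shallow linear classifier on AMD severity labels. The backbone choice (ResNet-18), the LinearSVC classifier, and the directory layout are illustrative assumptions only; the record does not specify the authors' exact network, classifier, or data pipeline.

```python
# Minimal sketch of transfer learning with universal DCNN features.
# All names (ResNet-18 backbone, LinearSVC, "fundus/train" layout) are
# assumptions for illustration, not the authors' exact pipeline.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Pretrained DCNN used as a frozen feature extractor (no fine-tuning).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()  # drop the ImageNet classifier head
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(image_dir):
    """Run each fundus image through the frozen DCNN and collect
    penultimate-layer feature vectors plus severity-class labels."""
    dataset = ImageFolder(image_dir, transform=preprocess)  # one subfolder per class
    loader = DataLoader(dataset, batch_size=32, shuffle=False)
    feats, labels = [], []
    with torch.no_grad():
        for images, targets in loader:
            feats.append(backbone(images).numpy())
            labels.append(targets.numpy())
    return np.concatenate(feats), np.concatenate(labels)

# Hypothetical train/test split of AREDS-style fundus images, organised
# one folder per AMD severity class (2-, 3-, or 4-class problem).
X_train, y_train = extract_features("fundus/train")
X_test, y_test = extract_features("fundus/test")

# Shallow linear classifier trained on the transferred features.
clf = LinearSVC(C=1.0, max_iter=10000)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Because the DCNN is frozen, only the linear classifier is trained, which is what makes this a transfer-learning / universal-features approach rather than end-to-end training.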
Database: OpenAIRE
External link: