A combined convolutional and recurrent neural network for enhanced glaucoma detection.

Author: Gheisari S; Vision Science Group, Graduate School of Health, University of Technology Sydney, Sydney, Australia. soheila.gheisari@uts.edu.au., Shariflou S; Vision Science Group, Graduate School of Health, University of Technology Sydney, Sydney, Australia., Phu J; Centre for Eye Health, School of Optometry and Vision Science, University of New South Wales, Sydney, Australia.; School of Optometry and Vision Science, University of New South Wales, Sydney, Australia., Kennedy PJ; Center for Artificial Intelligence, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, Australia., Agar A; Department of Ophthalmology, Prince of Wales Hospital, Sydney, Australia., Kalloniatis M; Centre for Eye Health, School of Optometry and Vision Science, University of New South Wales, Sydney, Australia.; School of Optometry and Vision Science, University of New South Wales, Sydney, Australia., Golzan SM; Vision Science Group, Graduate School of Health, University of Technology Sydney, Sydney, Australia.
Language: English
Source: Scientific reports [Sci Rep] 2021 Jan 21; Vol. 11 (1), pp. 1945. Date of Electronic Publication: 2021 Jan 21.
DOI: 10.1038/s41598-021-81554-4
Abstract: Glaucoma, a leading cause of blindness, is a multifaceted disease with several pathophysiological features manifesting in single fundus images (e.g., optic nerve cupping) as well as fundus videos (e.g., vascular pulsatility index). Current convolutional neural networks (CNNs) developed to detect glaucoma are all based on spatial features embedded in an image. We developed a combined CNN and recurrent neural network (RNN) that extracts not only the spatial features in a fundus image but also the temporal features embedded in a fundus video (i.e., sequential images). A total of 1810 fundus images and 295 fundus videos were used to train a CNN and a combined CNN and Long Short-Term Memory RNN. The combined CNN/RNN model reached an average F-measure of 96.2% in separating glaucoma from healthy eyes. In contrast, the base CNN model reached an average F-measure of only 79.2%. This proof-of-concept study demonstrates that extracting spatial and temporal features from fundus videos using a combined CNN and RNN can markedly enhance the accuracy of glaucoma detection.
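The abstract describes an architecture in which a CNN extracts spatial features from each fundus frame and an LSTM-based RNN models the temporal pattern across the frame sequence. Below is a minimal sketch of such a CNN + LSTM video classifier; it is not the authors' published model, and the backbone depth, feature dimensions, hidden size, and frame counts are illustrative placeholders rather than values reported in the paper.

```python
# Minimal sketch (not the authors' code) of a combined CNN + LSTM classifier
# for fundus videos, assuming frames arrive as RGB tensors and the task is
# binary (glaucoma vs. healthy). All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn


class CnnLstmGlaucomaNet(nn.Module):
    def __init__(self, cnn_feat_dim=128, lstm_hidden=64, num_classes=2):
        super().__init__()
        # Small CNN that maps each frame to a feature vector (spatial features).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),           # -> (batch*frames, 32, 1, 1)
            nn.Flatten(),                      # -> (batch*frames, 32)
            nn.Linear(32, cnn_feat_dim), nn.ReLU(),
        )
        # LSTM that models the frame sequence (temporal features).
        self.lstm = nn.LSTM(cnn_feat_dim, lstm_hidden, batch_first=True)
        self.classifier = nn.Linear(lstm_hidden, num_classes)

    def forward(self, video):
        # video: (batch, frames, 3, H, W)
        b, t, c, h, w = video.shape
        frame_feats = self.cnn(video.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(frame_feats)   # h_n: (1, batch, lstm_hidden)
        return self.classifier(h_n[-1])        # one logit vector per video


# Example forward pass on a dummy batch: 2 videos of 8 frames, 64x64 pixels each.
if __name__ == "__main__":
    model = CnnLstmGlaucomaNet()
    logits = model(torch.randn(2, 8, 3, 64, 64))
    print(logits.shape)  # torch.Size([2, 2])
```

The design point this sketch illustrates is the one stated in the abstract: per-frame spatial features are computed by a shared CNN, and the sequence of those features is summarized by an LSTM so that temporal cues in the fundus video contribute to the final glaucoma/healthy decision.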
Database: MEDLINE