Multilevel Deep Feature Generation Framework for Automated Detection of Retinal Abnormalities Using OCT Images

Author: Prabal Datta Barua, Wai Yee Chan, Sengul Dogan, Mehmet Baygin, Turker Tuncer, Edward J. Ciaccio, Nazrul Islam, Kang Hao Cheong, Zakia Sultana Shahid, U. Rajendra Acharya
Language: English
Year of publication: 2021
Subject:
Source: Entropy, Vol 23, Iss 12, p 1651 (2021)
Document type: article
ISSN: 1099-4300
DOI: 10.3390/e23121651
Description: Optical coherence tomography (OCT) images, coupled with various machine learning techniques, have been used to diagnose retinal disorders. This work aims to develop a novel framework for extracting deep features from 18 pre-trained convolutional neural networks (CNNs) and to attain high performance using OCT images. We have developed a new framework for automated detection of retinal disorders using transfer learning. The model consists of three phases: deep fused and multilevel feature extraction using 18 pre-trained networks and tent maximal pooling; feature selection with ReliefF; and classification using an optimized classifier. The novelty of the proposed framework lies in generating features with widely used CNNs and selecting the most suitable features for classification. The features produced by our proposed feature extractor are fed to iterative ReliefF (IRF) to automatically select the best feature vector. The quadratic support vector machine (QSVM) is utilized as the classifier. We developed our model using two public OCT image datasets, named database 1 (DB1) and database 2 (DB2). The proposed framework attains classification accuracies of 97.40% and 100% on DB1 and DB2, respectively. These results illustrate the success of our model.
Database: Directory of Open Access Journals
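
The abstract above outlines a three-phase pipeline: deep feature extraction from pre-trained CNNs, iterative ReliefF feature selection, and QSVM classification. The following is a minimal sketch of that kind of pipeline in Python, not the authors' implementation: a single pre-trained ResNet-18 stands in for the 18 CNNs, the tent maximal pooling step is omitted, the ReliefF step is a simplified (non-iterative) weighting, the QSVM is approximated by an SVC with a degree-2 polynomial kernel, and the image paths and labels in the usage comments are hypothetical placeholders.

```python
# Hedged sketch of a deep-feature + ReliefF-style selection + quadratic-SVM pipeline.
# Assumptions: ResNet-18 replaces the paper's 18 CNNs; tent maximal pooling is omitted;
# the ReliefF weighting below is simplified (single pass, equal class priors).
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC

def extract_deep_features(image_paths, device="cpu"):
    """Extract penultimate-layer (512-d) features from a pre-trained ResNet-18."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()          # drop the classification head
    backbone.eval().to(device)
    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    feats = []
    with torch.no_grad():
        for path in image_paths:
            img = Image.open(path).convert("RGB")
            x = preprocess(img).unsqueeze(0).to(device)
            feats.append(backbone(x).squeeze(0).cpu().numpy())
    return np.stack(feats)                     # shape: (n_images, 512)

def relieff_weights(X, y, n_neighbors=5, n_samples=200, seed=0):
    """Simplified ReliefF: reward features that differ toward nearest misses
    and penalize features that differ toward nearest hits."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    idx = rng.choice(n, size=min(n_samples, n), replace=False)
    for i in idx:
        dist = np.abs(X - X[i]).sum(axis=1)    # L1 distance to all samples
        dist[i] = np.inf                       # exclude the sample itself
        same = np.where(y == y[i])[0]
        diff = np.where(y != y[i])[0]
        hits = same[np.argsort(dist[same])][:n_neighbors]
        misses = diff[np.argsort(dist[diff])][:n_neighbors]
        w -= np.abs(X[hits] - X[i]).mean(axis=0)
        w += np.abs(X[misses] - X[i]).mean(axis=0)
    return w / len(idx)

# Hypothetical usage (train_paths, test_paths, y_train, y_test are placeholders):
# X_train = extract_deep_features(train_paths)
# X_test = extract_deep_features(test_paths)
# top = np.argsort(relieff_weights(X_train, y_train))[::-1][:256]  # keep top 256 features
# qsvm = SVC(kernel="poly", degree=2).fit(X_train[:, top], y_train)
# print(qsvm.score(X_test[:, top], y_test))
```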