Abstract: |
Image classification is an important process in the big-data revolution in healthcare. Several developments have considerably improved digital clinical image processing for classification and diagnosis, and medical image classification is an essential task in many medical imaging applications. Convolutional Neural Networks (CNNs) have shown strong performance in image classification for medical systems. However, CNNs and conventional standardized classifiers suffer from reliability concerns such as overfitting, inefficient feature extraction, and computational complexity. Therefore, this paper proposes a novel three-tiered approach to medical image classification that differs from conventional multi-class classification frameworks to overcome these problems. In the first tier, data preparation covers the collection and transformation of five different clinical datasets: Octoscope, Skin Cancer (PAD-UFES-20), the Kvasir dataset, a Covid-19 dataset, and Chest X-Ray Images (Pneumonia). The pre-processing stage ensures that the raw data are cleansed and organized for efficient analysis and training. In the second tier, sophisticated features are extracted from the pre-processed data by a Multi-head Self-attention Progressive Learning Network; the Multi-head Self-attention mechanism and Progressive Learning techniques are leveraged to improve feature extraction, providing better performance than traditional methods. In the third tier, the extracted features are classified by an Inception Residual Network-VGG19 (IRNet-VGG19), which combines the strengths of Inception modules with the deep architecture of VGG19 to further improve classification accuracy. Evaluated on all five datasets, IRNet-VGG19 achieves better classification results than other existing approaches.
The classification accuracies on the five datasets are 0.993, 0.966, 0.994, 0.984, and 0.968, respectively, outperforming other competitive methods.