Exploring dermoscopic structures for melanoma lesions' classification.
Author: | Malik FS; Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan., Yousaf MH; Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan.; School of Computing, College of Science, Engineering and Technology, University of South Africa (UNISA), Pretoria, South Africa., Sial HA; Barcelona Institute of Global Health (ISGlobal), Barcelona, Spain., Viriri S; School of Computing, College of Science, Engineering and Technology, University of South Africa (UNISA), Pretoria, South Africa.; School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa. |
Language: | English |
Source: | Frontiers in Big Data [Front Big Data] 2024 Mar 25; Vol. 7, pp. 1366312. Date of Electronic Publication: 2024 Mar 25 (Print Publication: 2024). |
DOI: | 10.3389/fdata.2024.1366312 |
Abstract: | Background: Melanoma is one of the deadliest skin cancers; it originates from melanocytes in which sun exposure causes mutations. Early detection raises the cure rate to about 90%, whereas misclassification drops survival to 15-20%. Clinical variation makes it difficult for dermatologists to distinguish benign nevi from melanomas. Current diagnostic methods, including visual analysis and dermoscopy, have limitations, underscoring the need for Artificial Intelligence in dermatology. Objectives: In this paper, we aim to explore dermoscopic structures for the classification of melanoma lesions. Training AI models faces a challenge known as brittleness, where small changes in input images alter the classification. A study explored AI vulnerability in discerning melanoma from benign lesions using features of size, color, and shape; tests with artificial and natural variations revealed a notable decline in accuracy, emphasizing the need for additional information, such as dermoscopic structures. Methodology: The study uses datasets of dermoscopic images with clinical markings examined by expert clinicians. Transformer- and CNN-based models are employed to classify these images based on dermoscopic structures, and the classification results are validated using feature visualization. To assess model susceptibility to image variations, the classifiers are evaluated on test sets containing original, duplicated, and digitally modified images; additional testing is done on ISIC 2016 images. The study focuses on three dermoscopic structures crucial for melanoma detection: blue-white veil, dots/globules, and streaks. Results: In evaluating model performance, adding convolutions to Vision Transformers proves highly effective, achieving up to 98% accuracy. CNN architectures such as VGG-16 and DenseNet-121 reach 50-60% accuracy and perform best on features other than dermoscopic structures. Vision Transformers without convolutions show reduced accuracy on the diverse test sets, revealing their brittleness. OpenAI CLIP, a pre-trained model, performs consistently well across the various test sets. To address brittleness, a mitigation method involving extensive data augmentation during training and 23 transformed duplicates at test time sustains accuracy. Conclusions: This paper proposes a melanoma classification scheme based on three dermoscopic structures, evaluated on the PH2 and Derm7pt datasets, and addresses AI susceptibility to image variations. Given the small dataset, future work should collect more annotated data and compute dermoscopic structural features automatically. Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. (Copyright © 2024 Malik, Yousaf, Sial and Viriri.) |
Database: | MEDLINE |
External link: |
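
The abstract's brittleness mitigation pairs heavy training-time augmentation with test-time augmentation, averaging predictions over 23 transformed duplicates of each test image. Below is a minimal sketch of that idea, assuming PyTorch and torchvision; the backbone choice, transform set, and all function and parameter names are hypothetical illustrations, not the authors' released code — only the three structure classes and the count of 23 duplicates come from the abstract.

```python
# Minimal test-time augmentation (TTA) sketch, assuming PyTorch + torchvision.
# The classifier and transform choices are illustrative stand-ins, not the
# authors' implementation.
import torch
import torchvision.transforms as T
from torchvision.models import densenet121

NUM_CLASSES = 3  # blue-white veil, dots/globules, streaks (per the abstract)
NUM_TTA = 23     # transformed duplicates per test image, as reported

# Stochastic transforms that could serve both training augmentation and TTA.
augment = T.Compose([
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.ToTensor(),
])

# Any backbone from the study (VGG-16, DenseNet-121, a ViT variant) could be
# swapped in here; DenseNet-121 is used purely as a concrete example.
model = densenet121(num_classes=NUM_CLASSES)
model.eval()

@torch.no_grad()
def tta_predict(pil_image):
    """Average softmax scores over NUM_TTA randomly transformed duplicates."""
    batch = torch.stack([augment(pil_image) for _ in range(NUM_TTA)])
    probs = torch.softmax(model(batch), dim=1)
    return probs.mean(dim=0).argmax().item()
```

In use, `tta_predict(Image.open("lesion.png").convert("RGB"))` would return the index of the predicted dermoscopic structure; averaging over the transformed duplicates is what damps the sensitivity to small input perturbations that the abstract calls brittleness.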