Abstract: |
Over the past decade, the medical field has seen a growing need for, and involvement of, technology and automation. Skin cancer detection is one area in particular need of such automation: the complexity of skin cancer cases is increasing, and it is becoming more difficult to rely on specialist doctors in every instance. The aim of this work is to build a skin cancer detection system using DICOM images and multi-column convolutional neural networks (CNNs). The DICOM format was chosen because it carries additional patient information that is not available in raw images, such as age, gender, special medical conditions, and time period. This work uses the ISIC 2020 dataset, which contains DICOM-formatted cancerous and non-cancerous images of the outer skin. The images are trained in a multi-column CNN architecture while the additional tag information is trained in a separate dense network, and the parameters from these two sub-models are concatenated to give a single prediction. For comparison, three other models were developed: one based on raw images alone, one based on preprocessed images, and one based on preprocessed images alone with a multi-column CNN network. The following evaluation metrics are recorded and compared for all models: accuracy (training, validation, and testing), F1-score, specificity, and root-mean-square error (RMSE). Additionally, a comparative analysis is carried out against a previous work in the same field, published in 2019, that used a MobileNet architecture. The proposed model achieved higher accuracy than the other models. Results of a tenfold cross-validation are also reported, highlighting the specific data split that yields the best training, validation, and testing accuracies.
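
The abstract describes a two-branch model: a multi-column CNN over the image and a dense network over the DICOM tag metadata, with the two feature sets concatenated for a single prediction. The sketch below is a minimal illustration of that idea in Keras, not the authors' exact architecture; the number of columns, kernel sizes, layer widths, input shapes, and tag encoding are all illustrative assumptions.

# Minimal two-branch sketch (assumed layer sizes and column count, not the paper's exact model).
import tensorflow as tf
from tensorflow.keras import layers, Model

def cnn_column(inputs, kernel_size):
    # One CNN column; columns are assumed to differ only in kernel size.
    x = layers.Conv2D(32, kernel_size, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, kernel_size, activation="relu", padding="same")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return x

image_in = layers.Input(shape=(224, 224, 3), name="image")
meta_in = layers.Input(shape=(8,), name="dicom_tags")  # e.g. age, sex, site (encoded); width is assumed

# Multi-column CNN branch: parallel columns with different receptive fields.
columns = [cnn_column(image_in, k) for k in (3, 5, 7)]
image_feat = layers.concatenate(columns)

# Dense branch for the DICOM tag information.
meta_feat = layers.Dense(32, activation="relu")(meta_in)
meta_feat = layers.Dense(16, activation="relu")(meta_feat)

# Fuse the two branches and emit a single malignancy probability.
fused = layers.concatenate([image_feat, meta_feat])
fused = layers.Dense(64, activation="relu")(fused)
output = layers.Dense(1, activation="sigmoid", name="malignant")(fused)

model = Model(inputs=[image_in, meta_in], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()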