Author:
Bal-Ghaoui, Mohamed; El Yousfi Alaoui, Moulay Hachem; Jilbab, Abdelilah; Bourouhou, Abdennacer
Source:
International Review on Modelling & Simulations; May 2022, Vol. 15 Issue 3, p146-153, 8p
Abstract:
Breast cancer is one of the leading causes of death worldwide. Ultrasound imaging is an extremely convenient way to diagnose breast tumors, as it can be used alone or in conjunction with mammography to determine the nature of a breast lesion; it is a non-ionizing, harmless, and very accessible technique. In the era of artificial intelligence, Computer-Aided Diagnosis (CAD) systems are widely used to support radiologists' interpretation. To this end, this manuscript applies two common Deep Learning (DL) approaches to Breast Ultrasound (BUS) images and evaluates their performance against state-of-the-art results. Two publicly available breast ultrasound cancer datasets were combined to build the final classification model. The combined dataset consists of 897 images in total: 537 labelled as benign and 360 as malignant. Promising results were obtained from both DL techniques. The custom CNN classifier scored 92.53% accuracy and 90.83% sensitivity with a false-positive rate of 6.33%. For the Transfer Learning (TL) approach, well-known pre-trained models were used as feature extractors with a basic classifier built on top. InceptionResNetV2 was the best-scoring model for this approach, with 91.19% accuracy, 86.38% sensitivity, and a very low false-positive rate of 4.28%. The VGG16 and InceptionV3 models also appear to outperform a study in the literature. [ABSTRACT FROM AUTHOR]
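As a brief illustration of the metrics reported in the abstract (accuracy, sensitivity, and false-positive rate), a minimal sketch of how they are derived from binary confusion-matrix counts; the counts below are hypothetical and chosen for illustration only, not taken from the paper:

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute accuracy, sensitivity (recall), and false-positive rate
    from binary confusion-matrix counts (malignant = positive class)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate: malignant cases caught
    fpr = fp / (fp + tn)           # benign cases wrongly flagged as malignant
    return accuracy, sensitivity, fpr

# Hypothetical counts (not the paper's actual confusion matrix):
acc, sens, fpr = classification_metrics(tp=90, fp=6, tn=84, fn=10)
print(f"accuracy={acc:.4f}, sensitivity={sens:.4f}, FPR={fpr:.4f}")
```

Note that sensitivity and false-positive rate move independently of accuracy, which is why the abstract reports all three: the custom CNN has higher sensitivity, while InceptionResNetV2 trades some sensitivity for a lower false-positive rate.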
Database:
Complementary Index