Author: Ade Irma Suryani, Chuan-Wang Chang, Yu-Fan Feng, Tin-Kwang Lin, Chih-Wen Lin, Jen-Chieh Cheng, Chuan-Yu Chang
Language: English
Year of publication: 2022
Source: IEEE Access, Vol 10, pp 124448-124463 (2022)
Document type: article
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3224486
Description:
Chest X-ray is a radiological clinical assessment tool commonly used to detect different types of lung disease, such as lung tumors. In this paper, we use Segmentation-based Deep Fusion Networks and Squeeze-and-Excitation blocks for model training. The proposed approach uses both whole and cropped lung X-ray images and adds an attention mechanism to address problems encountered during lesion identification, such as image misalignment, possible false positives from irrelevant objects, and the loss of small objects after image resizing. Two CNNs are used for feature extraction, and the extracted features are stitched together to form the final output, which is used to determine the presence of lung tumors in the image. Unlike previous methods, which identify lesion heatmaps from X-ray images, we use Semantic Segmentation via Gradient-Weighted Class Activation Mapping (Seg-Grad-CAM) to add semantic information for improved lung tumor localization. Experimental results show that our method achieves 98.51% accuracy and 99.01% sensitivity in classifying chest X-ray images with and without lung tumors. Furthermore, we combine Seg-Grad-CAM with semantic segmentation for feature visualization. Experimental results show that the proposed approach outperforms previous methods that use weakly supervised learning for localization. The method proposed in this paper reduces errors caused by subjective differences among radiologists, improves the efficiency of image interpretation, and facilitates correct treatment decisions.
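The description outlines a dual-branch design: two CNN feature extractors (one for the whole X-ray, one for the cropped lung region), Squeeze-and-Excitation attention, and concatenation of the extracted features for tumor/no-tumor classification. The PyTorch sketch below illustrates that structure only; the ResNet-18 backbones, feature dimensions, and layer names are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch (not the authors' code): a dual-branch CNN that extracts
# features from the whole chest X-ray and from a cropped lung region,
# applies a Squeeze-and-Excitation (SE) block to each branch, and
# concatenates ("stitches") the features for binary tumor classification.
import torch
import torch.nn as nn
import torchvision.models as models


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: channel attention via global pooling + gating."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global spatial average
        self.fc = nn.Sequential(                 # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # reweight channels


class DualBranchClassifier(nn.Module):
    """Two CNN feature extractors (whole image / cropped lungs) fused by concatenation."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # ResNet-18 backbones are an assumption for illustration only.
        self.whole_branch = nn.Sequential(*list(models.resnet18(weights=None).children())[:-2])
        self.crop_branch = nn.Sequential(*list(models.resnet18(weights=None).children())[:-2])
        self.se_whole = SEBlock(512)
        self.se_crop = SEBlock(512)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(512 * 2, num_classes)

    def forward(self, whole_img, cropped_img):
        fw = self.pool(self.se_whole(self.whole_branch(whole_img))).flatten(1)
        fc = self.pool(self.se_crop(self.crop_branch(cropped_img))).flatten(1)
        return self.classifier(torch.cat([fw, fc], dim=1))  # stitched features


if __name__ == "__main__":
    model = DualBranchClassifier()
    whole = torch.randn(1, 3, 224, 224)
    crop = torch.randn(1, 3, 224, 224)
    print(model(whole, crop).shape)  # torch.Size([1, 2])
```

Seg-Grad-CAM extends Grad-CAM by restricting the score being explained to a region of interest, e.g. the segmented lung area, before computing gradient-weighted channel importance. The sketch below is a simplified, generic Grad-CAM-style routine in that spirit, not the paper's implementation; the hook-based layer selection and the roi_mask argument are assumptions for illustration.

```python
# Simplified Grad-CAM-style sketch in the spirit of Seg-Grad-CAM (not the
# paper's code): gradients of a (region-restricted) class score w.r.t. a
# chosen convolutional layer give channel weights; the weighted activation
# map, upsampled to the input size, localizes the tumor region.
import torch
import torch.nn.functional as F


def seg_grad_cam(model, image, target_layer, class_idx, roi_mask=None):
    """Return a normalized heatmap (H, W) for class_idx on a single image."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["value"] = output

    def bwd_hook(_, grad_in, grad_out):
        gradients["value"] = grad_out[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    logits = model(image)                        # (1, C) or (1, C, H, W)
    score = logits[:, class_idx]
    if score.dim() > 1 and roi_mask is not None:
        score = score * roi_mask                 # restrict to the segmented region
    score = score.sum()
    model.zero_grad()
    score.backward()
    h1.remove()
    h2.remove()

    acts, grads = activations["value"], gradients["value"]   # (1, K, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)           # GAP of gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))  # weighted sum + ReLU
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]
```

For a classification head the ROI weighting is skipped; for a segmentation head, roi_mask would be a binary mask broadcastable to the output resolution so that only the lung region contributes to the explained score.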
Database: Directory of Open Access Journals