Cross-form efficient attention pyramidal network for semantic image segmentation
Author: | Anamika Maurya, Satish Chand |
---|---|
Year of publication: | 2022 |
Source: | AI Communications. 35:225-242 |
ISSN: | 1875-8452 0921-7126 |
DOI: | 10.3233/aic-210266 |
Description: | Although convolutional neural networks (CNNs) are leading the way in semantic segmentation, standard methods still have some flaws. First, they suffer from feature redundancy and weakly discriminative feature representations. Second, the number of effective multi-scale features they capture is limited. In this paper, we aim to overcome these limitations with a proposed network that uses two effective pre-trained models as an encoder. We develop a cross-form attention pyramid that acquires semantically rich multi-scale information from local and global priors. A spatial-wise attention module is introduced to further improve the segmentation results: it highlights the more discriminative regions of low-level features so that the network focuses on significant location information. We demonstrate the efficacy of the proposed network on three datasets: IDD Lite, PASCAL VOC 2012, and CamVid. Our model achieves a mIoU score of 70.7% on IDD Lite, 83.98% on PASCAL VOC 2012, and 73.8% on CamVid. |
Database: | OpenAIRE |
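The spatial-wise attention mechanism mentioned in the abstract can be sketched in a minimal form. This is only an illustrative stand-in, not the authors' implementation: it uses NumPy, and the scalar mixing weights `w_avg`/`w_max` are hypothetical placeholders for the learned layers a real module would contain.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(features, w_avg=0.5, w_max=0.5):
    """Illustrative spatial-wise attention: gate each (h, w) location
    with a sigmoid map built from channel-wise average and max pooling.
    w_avg / w_max are hypothetical scalar stand-ins for learned weights."""
    # features: (C, H, W) low-level feature map
    avg_pool = features.mean(axis=0)   # (H, W) channel-average descriptor
    max_pool = features.max(axis=0)    # (H, W) channel-max descriptor
    gate = sigmoid(w_avg * avg_pool + w_max * max_pool)  # values in (0, 1)
    # Re-weight every channel by the same spatial gate
    return features * gate[None, :, :]

# Toy usage: an 8-channel 4x4 feature map keeps its shape after gating
feats = np.random.randn(8, 4, 4)
out = spatial_attention(feats)
```

The gate multiplies all channels by a single per-location score, so regions the pooled descriptors mark as salient are emphasized while the tensor shape is preserved.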