Author:
Yang Li, Jian Luo, Haoyu Long, Qianqian Jin
Language:
English
Publication Year:
2024
Subject:

Source:
IEEE Access, Vol. 12, pp. 173076-173090 (2024)
Document Type:
article
ISSN:
2169-3536
DOI:
10.1109/ACCESS.2024.3501412
Description:
In hyperspectral image (HSI) classification, combining the strengths of convolutional neural networks (CNNs) and Transformers can significantly enhance classification performance and model robustness. However, networks that combine CNNs and Transformers suffer from limited classification accuracy and generalization when class samples are imbalanced, particularly in few-shot training scenarios. To address these problems, this article proposes a multi-scale spatial perception attention network (Ms-SPA) for few-shot HSI classification. The method is built on an encoder-decoder fully convolutional network (FCN), in which the encoder combines a CNN with a Transformer module to extract local and global spatial-spectral joint features simultaneously. Within the encoder, a spatial contraction perception Transformer (SCPFormer) is first proposed to improve the model's capacity for perceiving global-local joint features; a multi-scale spatial attention (MSSA) module is then proposed to capture spatial information at different convolution-kernel scales and cascade it into a more comprehensive representation. In the decoder, adaptive residual aggregation (ARA) is proposed to embed high-level semantic information into low-level features through a residual structure, thereby enhancing the perception of contextual information. Finally, a weighted mixed loss function (CL-MixedLoss) is proposed to address the imbalance of heterogeneous pixels in HSIs. Experimental results on three well-known HSI datasets show that the model achieves the best classification performance, with accuracy exceeding 95%, even when trained with a limited number of samples per class.
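The abstract does not spell out the internals of CL-MixedLoss, but a standard ingredient for handling imbalanced pixel classes is inverse-frequency class weighting inside a cross-entropy term. The sketch below is a minimal, generic illustration of that idea in plain Python, not the paper's actual loss; the function names and the weighting scheme are assumptions for illustration only.

```python
import math
from collections import Counter

def class_balanced_weights(labels, n_classes, eps=1e-8):
    """Inverse-frequency class weights: rarer classes get larger weights.

    Hypothetical helper, not from the paper; `labels` is a flat list of
    per-pixel ground-truth class ids.
    """
    counts = Counter(labels)
    total = len(labels)
    return [total / (n_classes * (counts.get(c, 0) + eps))
            for c in range(n_classes)]

def weighted_cross_entropy(probs, labels, weights):
    """Class-weighted negative log-likelihood, averaged over pixels.

    `probs` is a list of per-pixel class-probability lists summing to 1;
    `weights` comes from `class_balanced_weights`.
    """
    losses = [-weights[y] * math.log(p[y] + 1e-12)
              for p, y in zip(probs, labels)]
    return sum(losses) / len(losses)
```

With such weighting, mistakes on under-represented classes contribute more to the loss, which is one common way to counteract pixel-level class imbalance in segmentation-style HSI classifiers.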
Database:
Directory of Open Access Journals
External Link:
