Abstract: |
Existing deep learning methods for dehazing remote sensing (RS) images have typically relied on convolutional architectures. However, the inherent limitations of convolution, namely local receptive fields and input-independent weights, hinder the network's ability to model long-range dependencies and non-uniform haze distributions. To address this challenge, an effective transformer-based architecture for RS image dehazing, termed RSDformer, is designed to handle the irregular shapes and non-uniform spreads of haze commonly found in RS images. First, since capturing features from both local and non-local regions is essential, a novel detail-compensated transposed attention (DCTA) mechanism is proposed to capture both local and global dependencies across channels. Second, to strengthen the model's ability to learn degradation-related features and guide the restoration process effectively, a dual-frequency adaptive block (DFBA) with dynamic filters is developed. Finally, a dynamic gated fusion block (DGBF) is devised to enable effective feature fusion and exchange across different levels. With these designs, the proposed network robustly captures both local and global dependencies, improving the restoration of visual information in the image. Extensive experimental evaluations confirm the superiority of the proposed method over other competitive approaches.
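For readers unfamiliar with transposed attention, the sketch below illustrates the general idea the DCTA builds on: self-attention computed across channels (a C x C attention map) rather than across spatial positions, so the cost scales with channel count instead of image resolution, with a depth-wise convolution supplying a local bias alongside the global channel interactions. This is a minimal PyTorch sketch under those assumptions; the class name, head count, and layer choices are illustrative and are not taken from the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransposedChannelAttention(nn.Module):
    """Illustrative sketch of transposed (channel-wise) self-attention.

    Attention is computed across channels, yielding a (C/heads x C/heads)
    attention map per head, so the quadratic cost grows with channels,
    not with the large spatial resolution typical of RS images. This is
    an assumed illustration, not the authors' exact DCTA design.
    """

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        # Depth-wise conv adds local spatial context alongside the
        # global channel attention (the "detail" compensation idea).
        self.qkv_dw = nn.Conv2d(channels * 3, channels * 3, kernel_size=3,
                                padding=1, groups=channels * 3)
        self.project = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv_dw(self.qkv(x)).chunk(3, dim=1)

        # Reshape to (batch, heads, channels-per-head, pixels).
        def heads(t):
            return t.reshape(b, self.num_heads, c // self.num_heads, h * w)

        q, k, v = heads(q), heads(k), heads(v)
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        # Channel-by-channel attention map, not pixel-by-pixel.
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        attn = attn.softmax(dim=-1)
        out = (attn @ v).reshape(b, c, h, w)
        return self.project(out)
```

For a feature map x of shape (B, C, H, W), TransposedChannelAttention(C)(x) returns a tensor of the same shape, so a block like this can be dropped into an encoder-decoder dehazing network without changing the surrounding layout.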