Author:
Lai, Fangwei; Lin, Jianpu
Source:
SID Symposium Digest of Technical Papers; Apr 2024 Suppl 1, Vol. 55, Issue 1, p1191-1194, 4p
Abstract:
Low-light image enhancement (LLIE) aims to recover high-quality images from low-quality images acquired in dimly illuminated scenes. However, deep learning algorithms often struggle with uneven exposure and obscured texture features. To address these obstacles, we propose a simple but novel Transformer structure for LLIE, called Double Collapse Transformer (DCTFormer), which has the advantage of modeling non-local self-attention and capturing long-range dependencies easily. The core of DCTFormer is a stack of CT blocks composed of pixel-level spatial and channel self-attention, which can effectively aggregate features in both the spatial and channel dimensions. Furthermore, we design a Local Processing Unit (LPU) and an Inverted Residual Feed-Forward Module (IRFFN) to further enhance the model's ability to learn effective features from current local information. DCTFormer adopts a high-resolution preservation mechanism overall and gradually integrates deep features to achieve intra-block feature aggregation. Experimental results on existing benchmark datasets demonstrate the superiority of the proposed DCTFormer. [ABSTRACT FROM AUTHOR]
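The abstract's core idea, aggregating features in both the spatial and channel dimensions via self-attention, can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's implementation: it uses a single head, no learned query/key/value projections, no LPU or IRFFN, and the function names (`spatial_attention`, `channel_attention`, `ct_block`) are illustrative labels, not identifiers from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(x):
    # x: (N, C) — N flattened pixel tokens with C channels each.
    # Attention map is (N, N): every pixel attends to every other pixel,
    # which is how a Transformer captures long-range (non-local) dependencies.
    scores = x @ x.T / np.sqrt(x.shape[1])
    return softmax(scores, axis=-1) @ x

def channel_attention(x):
    # Transposed variant: attention map is (C, C), so features are
    # aggregated across channels instead of spatial positions.
    scores = x.T @ x / np.sqrt(x.shape[0])
    return x @ softmax(scores, axis=-1)

def ct_block(x):
    # Toy CT block: spatial then channel self-attention, each with a
    # residual connection (assumed structure, simplified for illustration).
    x = x + spatial_attention(x)
    x = x + channel_attention(x)
    return x

# A 16-pixel, 8-channel feature map keeps its shape through the block.
features = np.random.default_rng(0).normal(size=(16, 8))
out = ct_block(features)
print(out.shape)  # (16, 8)
```

The point of the transposed (channel) branch is cost: the spatial map grows as O(N^2) in the number of pixels, while the channel map is only O(C^2), so combining both gives global mixing in each dimension without a full joint attention.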
Database:
Complementary Index |