A joint image super‐resolution network for multiple degradations removal via complementary transformer and convolutional neural network
Author: Guoping Li, Zhenting Zhou, Guozhong Wang
Language: English
Year of publication: 2024
Subject:
Source: IET Image Processing, Vol 18, Iss 5, Pp 1344-1357 (2024)
Document type: article
ISSN: 1751-9667, 1751-9659
DOI: 10.1049/ipr2.13030
Description: Abstract: While recent years have witnessed the unprecedented success of deep convolutional neural networks (CNNs) and vision transformers in single-image super-resolution (SISR), these models typically assume a simple degradation, usually bicubic downsampling. Consequently, their performance drops dramatically when the actual degradation does not match this assumption, and they lack the capability to handle multiple degradations (e.g. Gaussian noise, bicubic downsampling, and salt & pepper noise). To address these issues, the authors propose a joint SR model (JIRSR) that can effectively handle multiple degradations in a single model. Specifically, the authors build parallel Transformer and CNN branches that complement each other through bidirectional feature fusion. Moreover, the authors adopt a random permutation of different kinds of noise and resizing operations to build the training datasets; both of these ideas are sketched after this record. Extensive experiments on classical SR, denoising, and multiple-degradation removal demonstrate that JIRSR achieves state-of-the-art (SOTA) performance on public benchmarks. Concretely, JIRSR outperforms the second-best model by 0.23 to 0.74 dB for multiple-degradation removal and is 0.20 to 0.36 dB higher than the SOTA methods on the Urban100 dataset under the ×4 SR task.
Database: Directory of Open Access Journals
External link:
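The abstract states that training pairs are built by applying several degradations in a random order. The snippet below is a minimal sketch of such a pipeline, assuming Gaussian noise, salt & pepper noise, and bicubic downsampling as the three operations; the function names and parameter values (`sigma`, `sp_ratio`, `scale`) are illustrative choices, not taken from the paper.

```python
import random

import numpy as np
from PIL import Image


def add_gaussian_noise(img: np.ndarray, sigma: float = 15.0) -> np.ndarray:
    """Additive white Gaussian noise on a 0-255 float image."""
    noise = np.random.normal(0.0, sigma, img.shape)
    return np.clip(img + noise, 0.0, 255.0)


def add_salt_pepper_noise(img: np.ndarray, sp_ratio: float = 0.02) -> np.ndarray:
    """Set a random fraction of pixels to pure black (pepper) or white (salt)."""
    out = img.copy()
    mask = np.random.rand(*img.shape[:2])
    out[mask < sp_ratio / 2] = 0.0          # pepper
    out[mask > 1.0 - sp_ratio / 2] = 255.0  # salt
    return out


def bicubic_downsample(img: np.ndarray, scale: int = 4) -> np.ndarray:
    """Bicubic downsampling by an integer scale factor via Pillow."""
    h, w = img.shape[:2]
    lr = Image.fromarray(img.astype(np.uint8)).resize(
        (w // scale, h // scale), Image.BICUBIC
    )
    return np.asarray(lr).astype(np.float64)


def degrade(hr: np.ndarray) -> np.ndarray:
    """Apply the three degradations to one HR image in a random order."""
    ops = [add_gaussian_noise, add_salt_pepper_noise, bicubic_downsample]
    random.shuffle(ops)
    lr = hr.astype(np.float64)
    for op in ops:
        lr = op(lr)
    return lr.astype(np.uint8)
```

For instance, `degrade(np.array(Image.open("hr.png").convert("RGB")))` (with a hypothetical file name) would yield one randomly degraded LR image per call, so the same HR image can contribute many degradation combinations during training.

Likewise, the parallel Transformer and CNN branches with bidirectional feature fusion could look roughly like the hypothetical PyTorch module below; the specific layers (a single multi-head attention stage, a 3x3 convolution, and 1x1 fusion convolutions) are placeholders rather than the authors' architecture.

```python
import torch
import torch.nn as nn


class BidirectionalFusionBlock(nn.Module):
    """One stage of parallel CNN / Transformer branches with two-way feature exchange."""

    def __init__(self, channels: int = 64, heads: int = 1):
        super().__init__()
        # CNN branch: local refinement with a 3x3 convolution.
        self.cnn_stage = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Transformer branch: global self-attention over flattened spatial tokens.
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        # 1x1 convolutions projecting one branch's features into the other.
        self.tr_to_cnn = nn.Conv2d(channels, channels, 1)
        self.cnn_to_tr = nn.Conv2d(channels, channels, 1)

    def forward(self, f_cnn: torch.Tensor, f_tr: torch.Tensor):
        b, c, h, w = f_tr.shape
        tokens = f_tr.flatten(2).transpose(1, 2)   # (B, H*W, C)
        tokens, _ = self.attn(tokens, tokens, tokens)
        f_tr = tokens.transpose(1, 2).reshape(b, c, h, w)
        f_cnn = self.cnn_stage(f_cnn)
        # Bidirectional fusion: each branch absorbs a projection of the other.
        return f_cnn + self.tr_to_cnn(f_tr), f_tr + self.cnn_to_tr(f_cnn)
```

In a full model, several such blocks would be stacked and followed by an upsampling head; the point here is only how each branch consumes a projection of the other branch's features at every stage, which is the complementary behaviour the abstract describes.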