Attention-guided video super-resolution with recurrent multi-scale spatial–temporal transformer

Author: Wei Sun, Xianguang Kong, Yanning Zhang
Language: English
Publication year: 2022
Subject:
Source: Complex & Intelligent Systems, Vol 9, Iss 4, Pp 3989-4002 (2022)
Document type: article
ISSN: 2199-4536
2198-6053
DOI: 10.1007/s40747-022-00944-x
Description: Abstract Video super-resolution (VSR) aims to recover high-resolution (HR) content from low-resolution (LR) observations by compositing the spatial–temporal information in the LR frames, so propagating and aggregating spatial–temporal information is crucial. Recently, while transformers have shown impressive performance on high-level vision tasks, few attempts have been made at image restoration, especially VSR. Moreover, previous transformers process spatial and temporal information simultaneously, which easily synthesizes confused textures, and their high computational cost limits their development. Towards this end, we construct a novel bidirectional recurrent VSR architecture. Our model disentangles the task of learning spatial–temporal information into two easier sub-tasks; each sub-task focuses on propagating and aggregating specific information with a multi-scale transformer-based design, which alleviates the difficulty of learning. Additionally, an attention-guided motion compensation module is applied to remove the influence of misalignment between frames. Experiments on three widely used benchmark datasets show that, relying on superior feature-correlation learning, the proposed network outperforms previous state-of-the-art methods, especially in recovering fine details.
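The abstract's core idea, bidirectional recurrent propagation of per-frame features with the two directions fused afterwards, can be illustrated with a toy sketch. This is not the paper's implementation: the real model aggregates with multi-scale transformer blocks and attention-guided motion compensation, whereas here each frame feature is a single float and `blend` is a hypothetical stand-in for the per-direction aggregation sub-task.

```python
def blend(hidden, frame, alpha=0.5):
    # Hypothetical stand-in for the transformer-based aggregation sub-task:
    # mixes the propagated hidden state with the current frame feature.
    return alpha * hidden + (1 - alpha) * frame

def bidirectional_propagate(frames, alpha=0.5):
    n = len(frames)
    # Forward branch: propagate information from past frames to future frames.
    fwd, h = [], 0.0
    for f in frames:
        h = blend(h, f, alpha)
        fwd.append(h)
    # Backward branch: propagate information from future frames to past frames.
    bwd, h = [0.0] * n, 0.0
    for i in range(n - 1, -1, -1):
        h = blend(h, frames[i], alpha)
        bwd[i] = h
    # Fuse the two propagation directions per frame (toy average here;
    # the paper would learn this fusion).
    return [(a + b) / 2 for a, b in zip(fwd, bwd)]

print(bidirectional_propagate([1.0, 2.0, 3.0]))  # → [0.9375, 1.5, 1.8125]
```

Note how each fused output mixes information from both earlier and later frames, which is the motivation for a bidirectional (rather than purely causal) recurrent design.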
Database: Directory of Open Access Journals