Space-time video super-resolution using long-term temporal feature aggregation

Authors: Kuanhao Chen, Zijie Yue, Miaojing Shi
Language: English
Year of publication: 2023
Subject:
Source: Autonomous Intelligent Systems, Vol 3, Iss 1, pp. 1-9 (2023)
Document type: article
ISSN: 2730-616X
DOI: 10.1007/s43684-023-00051-9
Description: Abstract: Space-time video super-resolution (STVSR) aims to reconstruct high-resolution, high-frame-rate videos from their low-resolution, low-frame-rate counterparts. Recent approaches employ end-to-end deep learning models for STVSR. They first interpolate intermediate frame features between the given frames, then perform local and global refinement over the feature sequence, and finally increase the spatial resolution of these features. However, in the crucial feature interpolation phase, they capture spatial-temporal information only from the most adjacent frame features, neglecting to model long-term spatial-temporal correlations across multiple neighbouring frames, which are needed to restore variable-speed object movements and maintain long-term motion continuity. In this paper, we propose a novel long-term temporal feature aggregation network (LTFA-Net) for STVSR. Specifically, we design a long-term mixture of experts (LTMoE) module for feature interpolation. LTMoE contains multiple experts that extract mutual and complementary spatial-temporal information from multiple consecutive adjacent frame features; these are then combined with different weights, produced by several gating networks, to obtain the interpolation results. Next, we perform local and global feature refinement using the Locally-temporal Feature Comparison (LFC) module and a bidirectional deformable ConvLSTM layer, respectively. Experimental results on two standard benchmarks, Adobe240 and GoPro, indicate the effectiveness and superiority of our approach over the state of the art.
Database: Directory of Open Access Journals
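
The abstract above describes the LTMoE interpolation step as a gated mixture of experts over several consecutive neighbouring frame features. The following is a minimal PyTorch sketch of that general idea under stated assumptions; it is not the authors' LTMoE implementation, and the module name, expert and gating architectures, neighbour count, and tensor shapes are all illustrative choices.

```python
# Minimal sketch of gated mixture-of-experts feature interpolation, assuming
# PyTorch; the class name and layer choices are hypothetical illustrations.
import torch
import torch.nn as nn


class MoEFrameInterpolation(nn.Module):
    """Interpolate an intermediate frame feature from several neighbouring
    frame features by weighting per-expert predictions with a gating network."""

    def __init__(self, channels: int, num_experts: int = 4, num_neighbours: int = 4):
        super().__init__()
        in_channels = channels * num_neighbours  # concatenated neighbour features
        # Each expert maps the concatenated neighbour features to one candidate
        # interpolation of the missing frame feature.
        self.experts = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Conv2d(in_channels, channels, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                )
                for _ in range(num_experts)
            ]
        )
        # The gating network predicts per-pixel mixing weights over the experts.
        self.gate = nn.Sequential(
            nn.Conv2d(in_channels, num_experts, kernel_size=3, padding=1),
            nn.Softmax(dim=1),
        )

    def forward(self, neighbour_feats: list) -> torch.Tensor:
        # neighbour_feats: list of (B, C, H, W) features from consecutive frames.
        x = torch.cat(neighbour_feats, dim=1)                           # (B, C*K, H, W)
        weights = self.gate(x)                                          # (B, E, H, W)
        candidates = torch.stack([e(x) for e in self.experts], dim=1)   # (B, E, C, H, W)
        # Weighted sum of expert outputs gives the interpolated feature.
        return (weights.unsqueeze(2) * candidates).sum(dim=1)           # (B, C, H, W)


if __name__ == "__main__":
    moe = MoEFrameInterpolation(channels=64, num_experts=4, num_neighbours=4)
    feats = [torch.randn(1, 64, 32, 32) for _ in range(4)]
    print(moe(feats).shape)  # torch.Size([1, 64, 32, 32])
```

The per-pixel softmax gate lets different experts dominate in different image regions, which is one plausible way to combine mutual and complementary information from multiple neighbouring frames as the abstract describes.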