Author: Xinkun Tang, Ying Xu, Feng Ouyang, Ligu Zhu
Language: English
Year of publication: 2023
Subject:
Source: Applied Sciences, Vol 13, Iss 18, p 10165 (2023)
Document type: article
ISSN: 2076-3417
DOI: 10.3390/app131810165
Description: Super-resolution of images and video has long been a challenging problem in computer vision, and advances in the field have broad practical impact. Video super-resolution methods in particular aim to restore spatial detail while preserving temporal coherence across frames. However, the large parameter counts and heavy computational demands of existing deep convolutional neural networks hinder their deployment on mobile platforms. To address these concerns, we investigate deep convolutional neural networks in depth and propose the Deep Residual Recursive Network (DRRN), a lightweight video super-resolution model that reduces computational load. The model applies residual learning to stabilize Recurrent Neural Network (RNN) training and adopts depth-wise separable convolution to improve the efficiency of the super-resolution operations. Thorough experimental evaluations show that the proposed model excels in computational efficiency while producing refined and temporally consistent video super-resolution results. This work is therefore an important step toward applying video super-resolution on resource-constrained devices.
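As a quick illustration of the two ingredients the abstract names (residual learning in a recurrent update and depth-wise separable convolution), the sketch below shows a minimal PyTorch version of such a recurrent refinement cell. It is an assumption-laden illustration of the general technique, not the paper's DRRN implementation; the module names, channel counts, and layer choices are all hypothetical.

```python
# Illustrative sketch only: a depth-wise separable convolution block combined
# with a residual connection inside one recurrent refinement step. All names
# and hyperparameters here are assumptions, not the authors' code.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depth-wise convolution followed by a 1x1 point-wise convolution."""
    def __init__(self, channels: int):
        super().__init__()
        # groups=channels makes the 3x3 convolution depth-wise (one filter per channel).
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels, bias=False)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.pointwise(self.depthwise(x)))

class ResidualRecurrentCell(nn.Module):
    """One recurrent step: fuse the current frame's features with the hidden
    state, refine them with depth-wise separable convolutions, and add a
    residual (skip) connection to help stabilize training."""
    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1, bias=False)
        self.body = nn.Sequential(DepthwiseSeparableConv(channels),
                                  DepthwiseSeparableConv(channels))

    def forward(self, frame_feat: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        fused = self.fuse(torch.cat([frame_feat, hidden], dim=1))
        return fused + self.body(fused)  # residual learning over the recurrent update

# Usage: 64-channel features of a 32x32 frame and the previous hidden state.
cell = ResidualRecurrentCell(channels=64)
feat = torch.randn(1, 64, 32, 32)
hidden = torch.zeros(1, 64, 32, 32)
new_hidden = cell(feat, hidden)  # shape: (1, 64, 32, 32)
```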
Database: Directory of Open Access Journals
External link: