DiffIR2VR-Zero: Zero-Shot Video Restoration with Diffusion-based Image Restoration Models
Author: Yeh, Chang-Han; Lin, Chin-Yang; Wang, Zhixiang; Hsiao, Chi-Wei; Chen, Ting-Hsuan; Shiu, Hau-Shiang; Liu, Yu-Lun
Year of publication: 2024
Document type: Working Paper
Description: This paper introduces a method for zero-shot video restoration using pre-trained image restoration diffusion models. Traditional video restoration methods often need retraining for different settings and struggle with limited generalization across various degradation types and datasets. Our approach uses a hierarchical token merging strategy for keyframes and local frames, combined with a hybrid correspondence mechanism that blends optical flow and feature-based nearest neighbor matching (latent merging; see the illustrative sketch below). We show that our method not only achieves top performance in zero-shot video restoration but also significantly surpasses trained models in generalization across diverse datasets and extreme degradations (8$\times$ super-resolution and high-standard-deviation video denoising). We present evidence through quantitative metrics and visual comparisons on various challenging datasets. Additionally, our technique works with any 2D restoration diffusion model, offering a versatile and powerful tool for video enhancement tasks without extensive retraining. This research leads to more efficient and widely applicable video restoration technologies, supporting advancements in fields that require high-quality video output. See our project page for video results and source code at https://jimmycv07.github.io/DiffIR2VR_web/.
Comment: Project page: https://jimmycv07.github.io/DiffIR2VR_web/
Database: arXiv
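The description mentions a hybrid correspondence mechanism that blends optical-flow warping with feature-based nearest-neighbor matching to merge latents between keyframes and local frames. Below is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' released implementation: the function and argument names (`warp_with_flow`, `nearest_neighbor_match`, `hybrid_merge`, `flow_conf`, `alpha`) and the confidence-weighted blending rule are assumptions made for illustration.

```python
# Hypothetical sketch only: a hybrid correspondence that prefers optical-flow
# warping where the flow is reliable and falls back to feature-based
# nearest-neighbor matches elsewhere. Names and blending rule are assumptions,
# not the paper's API.
import torch
import torch.nn.functional as F


def warp_with_flow(latent: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp a latent map (1, C, H, W) with a dense flow field (1, 2, H, W)."""
    _, _, h, w = latent.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float() + flow[0].permute(1, 2, 0)  # (H, W, 2)
    grid[..., 0] = 2.0 * grid[..., 0] / (w - 1) - 1.0  # normalize x to [-1, 1]
    grid[..., 1] = 2.0 * grid[..., 1] / (h - 1) - 1.0  # normalize y to [-1, 1]
    return F.grid_sample(latent, grid.unsqueeze(0), align_corners=True)


def nearest_neighbor_match(src_feat, dst_feat, src_latent):
    """For each destination token, copy the latent of its most similar source token."""
    src = F.normalize(src_feat.flatten(2), dim=1)   # (1, C_f, N_src)
    dst = F.normalize(dst_feat.flatten(2), dim=1)   # (1, C_f, N_dst)
    sim = torch.einsum("bci,bcj->bij", dst, src)    # cosine similarity (1, N_dst, N_src)
    idx = sim.argmax(dim=-1)                        # best source token per destination
    src_lat = src_latent.flatten(2)                 # (1, C_l, N_src)
    gathered = torch.gather(
        src_lat, 2, idx.unsqueeze(1).expand(-1, src_lat.shape[1], -1)
    )
    return gathered.view_as(src_latent)


def hybrid_merge(key_latent, frame_latent, key_feat, frame_feat, flow, flow_conf, alpha=0.5):
    """Blend a keyframe latent into the current frame's latent.

    Where `flow_conf` (1, 1, H, W, values in [0, 1]) is high, trust the flow-warped
    latent; elsewhere use the feature nearest-neighbor matches. `alpha` controls how
    strongly the merged correspondence overrides the frame's own latent.
    """
    warped = warp_with_flow(key_latent, flow)
    matched = nearest_neighbor_match(key_feat, frame_feat, key_latent)
    corresponded = flow_conf * warped + (1.0 - flow_conf) * matched
    return alpha * corresponded + (1.0 - alpha) * frame_latent


if __name__ == "__main__":
    c, h, w = 4, 32, 32
    key_latent, frame_latent = torch.randn(1, c, h, w), torch.randn(1, c, h, w)
    key_feat, frame_feat = torch.randn(1, 64, h, w), torch.randn(1, 64, h, w)
    flow = torch.zeros(1, 2, h, w)         # dummy flow field
    conf = torch.full((1, 1, h, w), 0.8)   # dummy flow-confidence map
    merged = hybrid_merge(key_latent, frame_latent, key_feat, frame_feat, flow, conf)
    print(merged.shape)  # torch.Size([1, 4, 32, 32])
```

In a real pipeline the flow confidence might come from a forward-backward consistency check, and the blend could be applied per attention token rather than per latent pixel; those choices are outside what the abstract specifies and are left as assumptions here.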