Showing 1 - 10 of 143 for search: '"Gu, Jinjin"'
We introduce a novel Multi-modal Guided Real-World Face Restoration (MGFR) technique designed to improve the quality of facial image restoration from low-quality inputs. Leveraging a blend of attribute text prompts, high-quality reference images, and…
External link:
http://arxiv.org/abs/2410.04161
This paper introduces a novel approach that leverages Large Language Models (LLMs) and Generative Agents to enhance time series forecasting by reasoning across both text and time series data. With language as a medium, our method adaptively integrates…
External link:
http://arxiv.org/abs/2409.17515
Despite the tremendous success of deep models in various individual image restoration tasks, there are at least two major technical challenges preventing these works from being applied to real-world usages: (1) the lack of generalization ability and…
External link:
http://arxiv.org/abs/2408.15143
Author:
Hu, Jinfan, Gu, Jinjin, Yu, Shiyao, Yu, Fanghua, Li, Zheyuan, You, Zhiyuan, Lu, Chaochao, Dong, Chao
Deep neural networks have significantly improved the performance of low-level vision tasks but also increased the difficulty of interpretability. A deep understanding of deep models is beneficial for both network design and practical reliability. To…
External link:
http://arxiv.org/abs/2407.19789
Author:
Chen, Haoyu, Li, Wenbo, Gu, Jinjin, Ren, Jingjing, Chen, Sixiang, Ye, Tian, Pei, Renjing, Zhou, Kaiwen, Song, Fenglong, Zhu, Lei
Natural images captured by mobile devices often suffer from multiple types of degradation, such as noise, blur, and low light. Traditional image restoration methods require manual selection of specific tasks, algorithms, and execution sequences, which…
External link:
http://arxiv.org/abs/2407.18035
With the rapid advancement of Vision Language Models (VLMs), VLM-based Image Quality Assessment (IQA) seeks to describe image quality linguistically to align with human expression and capture the multifaceted nature of IQA tasks. However, current met…
External link:
http://arxiv.org/abs/2405.18842
The success of large language models (LLMs) has fostered a new research trend of multi-modality large language models (MLLMs), which changes the paradigm of various fields in computer vision. Though MLLMs have shown promising results in numerous high…
External link:
http://arxiv.org/abs/2405.15734
Author:
Chen, Haoyu, Li, Wenbo, Gu, Jinjin, Ren, Jingjing, Sun, Haoze, Zou, Xueyi, Zhang, Zhensong, Yan, Youliang, Zhu, Lei
For image super-resolution (SR), bridging the gap between the performance on synthetic datasets and real-world degradation scenarios remains a challenge. This work introduces a novel "Low-Res Leads the Way" (LWay) training framework, merging Supervised…
External link:
http://arxiv.org/abs/2403.02601
Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild
Author:
Yu, Fanghua, Gu, Jinjin, Li, Zheyuan, Hu, Jinfan, Kong, Xiangtao, Wang, Xintao, He, Jingwen, Qiao, Yu, Dong, Chao
We introduce SUPIR (Scaling-UP Image Restoration), a groundbreaking image restoration method that harnesses generative prior and the power of model scaling up. Leveraging multi-modal techniques and advanced generative prior, SUPIR marks a significant…
External link:
http://arxiv.org/abs/2401.13627
We introduce a Depicted image Quality Assessment method (DepictQA), overcoming the constraints of traditional score-based methods. DepictQA allows for detailed, language-based, human-like evaluation of image quality by leveraging Multi-modal Large Language Models…
External link:
http://arxiv.org/abs/2312.08962