Showing 1 - 10 of 14
for search: '"Ling, Pengyang"'
Images captured in hazy weather generally suffer from quality degradation, and many dehazing methods have been developed to address this problem. However, the single-image dehazing problem remains challenging due to its ill-posed nature. In this paper, we…
External link:
http://arxiv.org/abs/2408.05683
Author:
Ling, Pengyang, Bu, Jiazi, Zhang, Pan, Dong, Xiaoyi, Zang, Yuhang, Wu, Tong, Chen, Huaian, Wang, Jiaqi, Jin, Yi
Motion-based controllable text-to-video generation uses motion cues to control video generation. Previous methods typically require training models to encode motion cues or fine-tuning video diffusion models. However, these approaches…
External link:
http://arxiv.org/abs/2406.05338
Instruction-based image editing focuses on equipping a generative model with the capacity to adhere to human-written instructions for editing images. Current approaches typically comprehend explicit and specific instructions. However, they often exhibit…
External link:
http://arxiv.org/abs/2405.11190
Author:
Gu, Yuxuan, Jin, Yi, Wang, Ben, Wei, Zhixiang, Ma, Xiaoxiao, Ling, Pengyang, Wang, Haoxuan, Chen, Huaian, Chen, Enhong
In this work, we observe that generators pre-trained on massive natural images inherently hold promising potential for superior low-light image enhancement across varying scenarios. Specifically, we embed a pre-trained generator…
External link:
http://arxiv.org/abs/2402.09694
Author:
Ma, Xiaoxiao, Wei, Zhixiang, Jin, Yi, Ling, Pengyang, Liu, Tianle, Wang, Ben, Dai, Junkang, Chen, Huaian, Chen, Enhong
In this work, we observe that a model trained on vast general images using a masking strategy is naturally embedded with distribution knowledge of natural images, and thus spontaneously attains the underlying potential for…
External link:
http://arxiv.org/abs/2401.14966
Author:
Wei, Zhixiang, Chen, Lin, Jin, Yi, Ma, Xiaoxiao, Liu, Tianle, Ling, Pengyang, Wang, Ben, Chen, Huaian, Zheng, Jinjin
In this paper, we first assess and harness various Vision Foundation Models (VFMs) in the context of Domain Generalized Semantic Segmentation (DGSS). Driven by the motivation that leveraging Stronger pre-trained models and Fewer trainable parameters…
External link:
http://arxiv.org/abs/2312.04265
Most prior semantic segmentation methods have been developed for day-time scenes and typically underperform in night-time scenes due to insufficient and complicated lighting conditions. In this work, we tackle this challenge by proposing a novel…
External link:
http://arxiv.org/abs/2307.09362
To serve the intricate and varied demands of image editing, precise and flexible manipulation of image content is indispensable. Recently, drag-based editing methods have achieved impressive performance. However, these methods predominantly center on…
External link:
http://arxiv.org/abs/2307.04684
Academic article
Published in:
IEEE Transactions on Industrial Informatics, February 2024, Vol. 20, Issue 2, pp. 2177-2189