Showing 1 - 10 of 193 for search: '"Liu, Hanyuan"'
Assistive drawing aims to facilitate the creative process by providing intelligent guidance to artists. Existing solutions often fail to effectively model intricate stroke details or adequately address the temporal aspects of drawing. We introduce hy…
External link:
http://arxiv.org/abs/2408.09348
Author:
Xing, Jinbo, Liu, Hanyuan, Xia, Menghan, Zhang, Yong, Wang, Xintao, Shan, Ying, Wong, Tien-Tsin
We introduce ToonCrafter, a novel approach that transcends traditional correspondence-based cartoon video interpolation, paving the way for generative interpolation. Traditional methods, which implicitly assume linear motion and the absence of complic…
External link:
http://arxiv.org/abs/2405.17933
Author:
Liu, Hanyuan
The evolution of the marine atmospheric boundary layer (MABL) in the vicinity of a sea surface temperature (SST) front is of particular research interest, as the large air-sea temperature and humidity differences at the surface fuel various physical…
External link:
https://hdl.handle.net/1721.1/154364
Text-guided video-to-video stylization transforms the visual appearance of a source video into a different appearance guided by textual prompts. Existing text-guided image diffusion models can be extended for stylized video synthesis. However, they str…
External link:
http://arxiv.org/abs/2311.14343
This paper introduces a novel approach to synthesize texture to dress up a given 3D object, given a text prompt. Based on the pretrained text-to-image (T2I) diffusion model, existing methods usually employ a project-and-inpaint approach, in which a v…
External link:
http://arxiv.org/abs/2311.12891
ITM (inverse tone mapping) converts SDR (standard dynamic range) footage to HDR/WCG (high dynamic range / wide color gamut) for media production. It happens not only when remastering legacy SDR footage at front-end content providers, but also when adapting o…
External link:
http://arxiv.org/abs/2309.17160
Video colorization is a challenging task that involves inferring plausible and temporally consistent colors for grayscale frames. In this paper, we present ColorDiffuser, an adaptation of a pre-trained text-to-image latent diffusion model for video c…
External link:
http://arxiv.org/abs/2306.01732
Author:
Xing, Jinbo, Xia, Menghan, Liu, Yuxin, Zhang, Yuechen, Zhang, Yong, He, Yingqing, Liu, Hanyuan, Chen, Haoxin, Cun, Xiaodong, Wang, Xintao, Shan, Ying, Wong, Tien-Tsin
Creating a vivid video from an event or scenario in our imagination is a truly fascinating experience. Recent advancements in text-to-video synthesis have unveiled the potential to achieve this with prompts only. While text is convenient in conveyin…
External link:
http://arxiv.org/abs/2306.00943
Image colorization has been attracting the research interest of the community for decades. However, existing methods still struggle to provide satisfactory colorized results given grayscale images, due to a lack of human-like global understanding of…
External link:
http://arxiv.org/abs/2304.11105
Author:
Chen, Daokun, Li, Xinbin, Wang, Zhanbin, Kang, Chengxin, He, Tao, Liu, Hanyuan, Jiang, Zhiyang, Xi, Junsheng, Zhang, Yao
Published in:
In Heliyon 15 September 2024 10(17)