Showing 1 - 10 of 12 for search: '"Chang, DI"'
With the success of 2D and 3D visual generative models, there is growing interest in generating 4D content. Existing methods primarily rely on text prompts to produce 4D content, but they often fall short of accurately defining complex or rare motion…
External link:
http://arxiv.org/abs/2405.14017
Human-human communication is like a delicate dance where listeners and speakers concurrently interact to maintain conversational dynamics. Hence, an effective model for generating listener nonverbal behaviors requires understanding the dyadic context…
External link:
http://arxiv.org/abs/2403.09069
Author:
Gu, Yuming; Xie, You; Xu, Hongyi; Song, Guoxian; Shi, Yichun; Chang, Di; Yang, Jing; Luo, Linjie
We present DiffPortrait3D, a conditional diffusion model that is capable of synthesizing 3D-consistent photo-realistic novel views from as few as a single in-the-wild portrait. Specifically, given a single RGB input, we aim to synthesize plausible…
External link:
http://arxiv.org/abs/2312.13016
Author:
Chang, Di; Shi, Yichun; Gao, Quankai; Fu, Jessica; Xu, Hongyi; Song, Guoxian; Yan, Qing; Zhu, Yizhe; Yang, Xiao; Soleymani, Mohammad
In this work, we propose MagicPose, a diffusion-based model for 2D human pose and facial expression retargeting. Specifically, given a reference image, we aim to generate a person's new images by controlling the poses and facial expressions while…
External link:
http://arxiv.org/abs/2311.12052
Author:
Yin, Yufeng; Chang, Di; Song, Guoxian; Sang, Shen; Zhi, Tiancheng; Liu, Jing; Luo, Linjie; Soleymani, Mohammad
Automatic detection of facial Action Units (AUs) allows for objective facial expression analysis. Due to the high cost of AU labeling and the limited size of existing benchmarks, previous AU detection methods tend to overfit the dataset, resulting in…
External link:
http://arxiv.org/abs/2308.12380
Facial expression analysis is an important tool for human-computer interaction. In this paper, we introduce LibreFace, an open-source toolkit for facial expression analysis. This open-source toolbox offers real-time and offline analysis of facial…
External link:
http://arxiv.org/abs/2308.10713
Facial action unit detection has emerged as an important task within facial expression analysis, aimed at detecting specific pre-defined, objective facial expressions, such as lip tightening and cheek raising. This paper presents our submission to…
External link:
http://arxiv.org/abs/2303.10590
Author:
Chang, Di; Božič, Aljaž; Zhang, Tong; Yan, Qingsong; Chen, Yingcong; Süsstrunk, Sabine; Nießner, Matthias
Finding accurate correspondences among different views is the Achilles' heel of unsupervised Multi-View Stereo (MVS). Existing methods are built upon the assumption that corresponding pixels share similar photometric features. However, multi-view…
External link:
http://arxiv.org/abs/2203.03949
Multi-view Stereo (MVS) with known camera parameters is essentially a 1D search problem within a valid depth range. Recent deep learning-based MVS methods typically densely sample depth hypotheses in the depth range, and then construct prohibitively…
External link:
http://arxiv.org/abs/2112.02338
Author:
Chang, Di
Object detection is a very important basic research direction in the field of computer vision and a basic method for other advanced vision tasks. It has been widely used in practical applications such as object tracking…
External link:
http://arxiv.org/abs/2111.12982