Showing 1 - 10 of 26 for the search: '"Guo, Yu-Xiao"'
Author:
Xu, Sicheng, Chen, Guojun, Guo, Yu-Xiao, Yang, Jiaolong, Li, Chong, Zang, Zhenyu, Zhang, Yizhong, Tong, Xin, Guo, Baining
We introduce VASA, a framework for generating lifelike talking faces with appealing visual affective skills (VAS) given a single static image and a speech audio clip. Our premiere model, VASA-1, is capable of not only generating lip movements that are…
External link:
http://arxiv.org/abs/2404.10667
As a promising 3D generation technique, multiview diffusion (MVD) has received a lot of attention due to its advantages in terms of generalizability, quality, and efficiency. By finetuning pretrained large image diffusion models with 3D data, the MVD…
External link:
http://arxiv.org/abs/2402.14253
Data diversity and abundance are essential for improving the performance and generalization of models in natural language processing and 2D vision. However, the 3D vision domain suffers from a lack of 3D data, and simply combining multiple 3D datasets…
External link:
http://arxiv.org/abs/2402.14215
Author:
Yang, Yu-Qi, Guo, Yu-Xiao, Xiong, Jian-Yu, Liu, Yang, Pan, Hao, Wang, Peng-Shuai, Tong, Xin, Guo, Baining
The use of pretrained backbones with fine-tuning has been successful for 2D vision and natural language processing tasks, showing advantages over task-specific networks. In this work, we introduce a pretrained 3D backbone, called Swin3D, for 3D indoor…
External link:
http://arxiv.org/abs/2304.06906
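The pretrain-then-finetune workflow this abstract refers to can be illustrated with a short, generic PyTorch sketch. The class names (Backbone3D, SegHead), shapes, learning rates, and checkpoint path below are illustrative assumptions, not the actual Swin3D code or configuration.

```python
# Minimal sketch of fine-tuning a pretrained 3D backbone with a task head.
# All names, dimensions, and the checkpoint path are placeholder assumptions.
import torch
import torch.nn as nn

class Backbone3D(nn.Module):
    """Stand-in for a pretrained 3D backbone operating on per-point features."""
    def __init__(self, feat_dim=96):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(6, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim))
    def forward(self, points):                 # points: (N, 6) = xyz + normal
        return self.encoder(points)            # per-point features (N, feat_dim)

class SegHead(nn.Module):
    """Task-specific head attached during fine-tuning (here: semantic labels)."""
    def __init__(self, feat_dim=96, num_classes=20):
        super().__init__()
        self.cls = nn.Linear(feat_dim, num_classes)
    def forward(self, feats):
        return self.cls(feats)

backbone = Backbone3D()
# backbone.load_state_dict(torch.load("pretrained_backbone.pth"))  # hypothetical checkpoint
head = SegHead()

# Common fine-tuning recipe: a smaller learning rate for the pretrained weights
# than for the freshly initialised task head.
optimizer = torch.optim.AdamW([
    {"params": backbone.parameters(), "lr": 1e-5},
    {"params": head.parameters(),     "lr": 1e-3},
])

points = torch.randn(1024, 6)                  # dummy indoor-scene points
labels = torch.randint(0, 20, (1024,))
logits = head(backbone(points))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
```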
Author:
Guo, Yu Xiao, He, Yu Xi
Published in:
In Colloid and Interface Science Communications July 2024 61
We present a method for creating 3D indoor scenes with a generative model learned from a collection of semantic-segmented depth images captured from different unknown scenes. Given a room with a specified size, our method automatically generates 3D objects…
External link:
http://arxiv.org/abs/2108.09022
Published in:
In Journal of Integrative Agriculture September 2023 22(9):2893-2904
Author:
Guo, Yu-Xiao, Tong, Xin
We introduce a View-Volume convolutional neural network (VVNet) for inferring the occupancy and semantic labels of a volumetric 3D scene from a single depth image. The VVNet concatenates a 2D view CNN and a 3D volume CNN with a differentiable projection layer…
External link:
http://arxiv.org/abs/1806.05361
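The architecture outlined in this abstract (a 2D view CNN whose per-pixel features are lifted into a voxel grid through a differentiable projection and then processed by a 3D volume CNN) can be sketched roughly as follows. The intrinsics, grid resolution, scene bounds, and layer widths are illustrative assumptions, not the published VVNet design.

```python
# Rough sketch: 2D CNN on a depth image -> back-projection of per-pixel features
# into a voxel grid -> 3D CNN over the resulting volume. All constants are assumed.
import torch
import torch.nn as nn

H, W, C, G = 64, 64, 8, 16           # depth-image size, feature channels, voxel grid size
fx = fy = 64.0                       # assumed pinhole intrinsics
cx, cy = W / 2, H / 2

view_cnn = nn.Sequential(nn.Conv2d(1, C, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(C, C, 3, padding=1))
volume_cnn = nn.Sequential(nn.Conv3d(C, C, 3, padding=1), nn.ReLU(),
                           nn.Conv3d(C, 12, 3, padding=1))  # 12 semantic classes, assumed

def project_to_volume(feats, depth):
    """Scatter per-pixel 2D features into a G^3 volume by back-projecting with depth."""
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    z = depth                                  # (H, W), metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # map camera-space coordinates into voxel indices (assumed 4 m scene bounds)
    ix = ((x + 2.0) / 4.0 * G).long().clamp(0, G - 1).flatten()
    iy = ((y + 2.0) / 4.0 * G).long().clamp(0, G - 1).flatten()
    iz = (z / 4.0 * G).long().clamp(0, G - 1).flatten()
    volume = torch.zeros(C, G, G, G)
    ch = torch.arange(C)[:, None]              # broadcast channel index
    volume.index_put_((ch, ix[None, :].expand(C, -1),
                           iy[None, :].expand(C, -1),
                           iz[None, :].expand(C, -1)),
                      feats.reshape(C, -1), accumulate=True)
    return volume

depth = torch.rand(1, 1, H, W) * 4.0           # dummy depth image
feats = view_cnn(depth)[0]                     # (C, H, W) per-pixel features
volume = project_to_volume(feats, depth[0, 0]) # (C, G, G, G) feature volume
logits = volume_cnn(volume.unsqueeze(0))       # per-voxel semantic logits
print(logits.shape)                            # torch.Size([1, 12, 16, 16, 16])
```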
We present O-CNN, an Octree-based Convolutional Neural Network (CNN) for 3D shape analysis. Built upon the octree representation of 3D shapes, our method takes the average normal vectors of a 3D model sampled in the finest leaf octants as input and performs…
External link:
http://arxiv.org/abs/1712.01537
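The input signal described in this abstract (average normal vectors accumulated in the finest leaf octants) can be sketched as follows. The octree hierarchy and the convolutions O-CNN runs on it are not reproduced here; the depth and data are placeholder assumptions.

```python
# Sketch of computing the per-octant average normal used as the network input.
# Octree depth and the sampled surface points are placeholder assumptions.
import torch

octree_depth = 5                          # 2**5 = 32 leaf cells per axis
res = 2 ** octree_depth

# dummy surface samples: positions in the unit cube with unit normals
points = torch.rand(10000, 3)
normals = torch.nn.functional.normalize(torch.randn(10000, 3), dim=1)

# linear key of the finest octant containing each point
cell = (points * res).long().clamp(0, res - 1)
key = (cell[:, 0] * res + cell[:, 1]) * res + cell[:, 2]

# accumulate and average the normals of all points falling into the same octant
sum_n = torch.zeros(res ** 3, 3).index_add_(0, key, normals)
count = torch.zeros(res ** 3).index_add_(0, key, torch.ones(len(key)))
occupied = count > 0
avg_normal = sum_n[occupied] / count[occupied, None]   # one averaged normal per occupied leaf

print(f"{int(occupied.sum())} occupied octants at depth {octree_depth}")
```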
Published in:
Computer Graphics Forum, Oct 2022, Vol. 41, Issue 7, p. 237-246.