Showing 1 - 10 of 402 for the search: '"XIE, Zhenyu"'
Image-based 3D Virtual Try-ON (VTON) aims to sculpt the 3D human according to person and clothes images, which is data-efficient (i.e., getting rid of expensive 3D data) but challenging. Recent text-to-3D methods achieve remarkable improvement in hig…
External link:
http://arxiv.org/abs/2407.16511
Author:
Jiang, Hong, Jin, Longmei, Qian, Xu, Xiong, Xu, La, Xuena, Chen, Weiyi, Yang, Xiaoguang, Yang, Fengyun, Zhang, Xinwen, Abudukelimu, Nazhakaiti, Li, Xingying, Xie, Zhenyu, Zhu, Xiaoling, Zhang, Xiaohua, Zhang, Lifeng, Wang, Li, Li, Lingling, Li, Mu
Published in:
Journal of Medical Internet Research, Vol 23, Iss 1, p e18722 (2021)
Background: China was the first country in the world to experience a large-scale COVID-19 outbreak. The rapid spread of the disease and the enforcement of public health measures have caused distress among vulnerable populations such as pregnant women. With…
External link:
https://doaj.org/article/f992c47f11d34943a397621706664390
Author:
Jin, Xing, Xie, Zhenyu, Zhang, Xiangpeng, Hou, Hanfei, Zhang, Fangxing, Zhang, Xuanyi, Chang, Lin, Gong, Qihuang, Yang, Qi-Fan
Optical frequency division relies on optical frequency combs to coherently translate ultra-stable optical frequency references to the microwave domain. This technology has enabled microwave synthesis with ultralow timing noise, but the required instr…
External link:
http://arxiv.org/abs/2401.12760
In this paper, we propose a novel cascaded diffusion-based generative framework for text-driven human motion synthesis, which exploits a strategy named GradUally Enriching SyntheSis (GUESS). The strategy sets up generation objecti…
External link:
http://arxiv.org/abs/2401.02142
Text-guided motion synthesis aims to generate 3D human motion that not only precisely reflects the textual description but also reveals the motion details as much as possible. Pioneering methods explore the diffusion model for text-to-motion synthesis and…
External link:
http://arxiv.org/abs/2312.10960
Author:
Zhang, Xujie, Li, Xiu, Kampffmeyer, Michael, Dong, Xin, Xie, Zhenyu, Zhu, Feida, Dong, Haoye, Liang, Xiaodan
Image-based Virtual Try-On (VITON) aims to transfer an in-shop garment image onto a target person. While existing methods focus on warping the garment to fit the body pose, they often overlook the synthesis quality around the garment-skin boundary an…
External link:
http://arxiv.org/abs/2312.03667
The utilization of Large Language Models (LLMs) for the construction of AI systems has garnered significant attention across diverse fields. The extension of LLMs to the domain of fashion holds substantial commercial potential but also inherent chall…
External link:
http://arxiv.org/abs/2307.13240
GP-VTON: Towards General Purpose Virtual Try-on via Collaborative Local-Flow Global-Parsing Learning
Author:
Xie, Zhenyu, Huang, Zaiyu, Dong, Xin, Zhao, Fuwei, Dong, Haoye, Zhang, Xijin, Zhu, Feida, Liang, Xiaodan
Image-based Virtual Try-ON aims to transfer an in-shop garment onto a specific person. Existing methods employ a global warping module to model the anisotropic deformation for different garment parts, which fails to preserve the semantic information…
External link:
http://arxiv.org/abs/2303.13756
In this paper, we target image-based person-to-person virtual try-on in the presence of diverse poses and large viewpoint variations. Existing methods are restricted in this setting as they estimate garment warping flows mainly based on 2D poses and…
External link:
http://arxiv.org/abs/2211.14052
Author:
Zhang, Xujie, Sha, Yu, Kampffmeyer, Michael C., Xie, Zhenyu, Jie, Zequn, Huang, Chengwen, Peng, Jianqing, Liang, Xiaodan
Cross-modal fashion image synthesis has emerged as one of the most promising directions in the generation domain due to the vast untapped potential of incorporating multiple modalities and the wide range of fashion image applications. To facilitate a…
External link:
http://arxiv.org/abs/2208.05621