Showing 1 - 10 of 367 for search: '"XIE Zhenyu"'
Published in:
Shanghai yufang yixue, Vol 36, Iss 2, Pp 186-191 (2024)
Objective: To investigate the current status of first aid knowledge among middle-aged and elderly residents aged 50 and above in a community in Shanghai, in order to provide a reference for improving the self-rescue and mutual aid capabilities of middle-aged…
External link:
https://doaj.org/article/b7eb7f15af2c4231a9c399c3dd407403
Author:
WANG Xuejuan, SHAO Zhiying, ZHU Minrong, XIE Zhenyu, LYU Jingjing, ZHU Fang, DONG Bin, ZHAO Liebin, CHEN Huiwen
Published in:
Shanghai yufang yixue, Vol 34, Iss 5, Pp 464-468 (2022)
Objective: To investigate the value of remote consultation with heart sound acquisition in the screening and referral of neonates with congenital heart diseases (CHD) in primary hospitals. Methods: A total of 4 030 neonates with non-critical diseases were s…
External link:
https://doaj.org/article/2d92146bddeb41b688983a5b3a366668
Image-based 3D Virtual Try-ON (VTON) aims to sculpt a 3D human according to images of a person and clothes, which is data-efficient (i.e., it avoids expensive 3D data) but challenging. Recent text-to-3D methods achieve remarkable improvement in hig…
External link:
http://arxiv.org/abs/2407.16511
Author:
Jin, Xing, Xie, Zhenyu, Zhang, Xiangpeng, Hou, Hanfei, Zhang, Fangxing, Zhang, Xuanyi, Chang, Lin, Gong, Qihuang, Yang, Qi-Fan
Optical frequency division relies on optical frequency combs to coherently translate ultra-stable optical frequency references to the microwave domain. This technology has enabled microwave synthesis with ultralow timing noise, but the required instr…
External link:
http://arxiv.org/abs/2401.12760
In this paper, we propose a novel cascaded diffusion-based generative framework for text-driven human motion synthesis, which exploits a strategy named GradUally Enriching SyntheSis (abbreviated GUESS). The strategy sets up generation objecti…
External link:
http://arxiv.org/abs/2401.02142
Text-guided motion synthesis aims to generate 3D human motion that not only precisely reflects the textual description but also reveals the motion details as much as possible. Pioneering methods explore the diffusion model for text-to-motion synthesis and…
External link:
http://arxiv.org/abs/2312.10960
Author:
Zhang, Xujie, Li, Xiu, Kampffmeyer, Michael, Dong, Xin, Xie, Zhenyu, Zhu, Feida, Dong, Haoye, Liang, Xiaodan
Image-based Virtual Try-On (VITON) aims to transfer an in-shop garment image onto a target person. While existing methods focus on warping the garment to fit the body pose, they often overlook the synthesis quality around the garment-skin boundary an…
External link:
http://arxiv.org/abs/2312.03667
The use of Large Language Models (LLMs) to build AI systems has garnered significant attention across diverse fields. Extending LLMs to the fashion domain holds substantial commercial potential but also inherent chall…
External link:
http://arxiv.org/abs/2307.13240
GP-VTON: Towards General Purpose Virtual Try-on via Collaborative Local-Flow Global-Parsing Learning
Author:
Xie, Zhenyu, Huang, Zaiyu, Dong, Xin, Zhao, Fuwei, Dong, Haoye, Zhang, Xijin, Zhu, Feida, Liang, Xiaodan
Image-based Virtual Try-ON aims to transfer an in-shop garment onto a specific person. Existing methods employ a global warping module to model the anisotropic deformation of different garment parts, which fails to preserve the semantic information…
External link:
http://arxiv.org/abs/2303.13756
In this paper, we target image-based person-to-person virtual try-on in the presence of diverse poses and large viewpoint variations. Existing methods are restricted in this setting, as they estimate garment warping flows mainly based on 2D poses and…
External link:
http://arxiv.org/abs/2211.14052