Showing 1 - 10 of 61 results for the search '"Yu Yingchen"'
Author:
Yu Yingchen
Published in:
Journal of World Languages, Vol 9, Iss 2, Pp 308-314 (2023)
External link:
https://doaj.org/article/738f837c6f4949d288735173a0a31a51
Learning-based Text-to-Image (TTI) models like Stable Diffusion have revolutionized the way visual content is generated in various domains. However, recent research has shown that non-negligible social bias exists in current state-of-the-art TTI systems…
External link:
http://arxiv.org/abs/2402.14577
Author:
Yu Yingchen
Published in:
E3S Web of Conferences, Vol 253, p 02024 (2021)
In recent years, the manufacturing industry has developed rapidly amid fierce competition. Manufacturing enterprises face the challenges of on-time delivery, multiple product choices, and quick response to order modifications. Reasonable and e…
External link:
https://doaj.org/article/53f2f7555a5f4acd9fb0f5f141c9387e
Author:
Zhang, Jiahui, Zhan, Fangneng, Yu, Yingchen, Liu, Kunhao, Wu, Rongliang, Zhang, Xiaoqin, Shao, Ling, Lu, Shijian
Pose-free neural radiance fields (NeRF) aim to train NeRF with unposed multi-view images, and they have achieved very impressive success in recent years. Most existing works share the pipeline of training a coarse pose estimator with rendered images at f…
External link:
http://arxiv.org/abs/2308.15049
Author:
Xu, Muyu, Zhan, Fangneng, Zhang, Jiahui, Yu, Yingchen, Zhang, Xiaoqin, Theobalt, Christian, Shao, Ling, Lu, Shijian
Neural Radiance Field (NeRF) has shown impressive performance in novel view synthesis via implicit scene representation. However, it usually suffers from poor scalability, as it requires densely sampled images for each new scene. Several studies have attempted…
External link:
http://arxiv.org/abs/2308.04826
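Note: the record above mentions NeRF's implicit scene representation only in passing. As a rough illustrative sketch (not the method of the cited paper), the Python/PyTorch snippet below shows the general idea: a small MLP is queried at 3D points sampled along each camera ray and returns a colour and a volume density, which are alpha-composited into a pixel colour. The names TinyNeRF and render_rays, the network size, and the sampling settings are all hypothetical.

    import torch
    import torch.nn as nn

    class TinyNeRF(nn.Module):
        """Toy implicit scene representation: 3D point -> (colour, density)."""
        def __init__(self, hidden=128):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 4),          # (r, g, b, sigma) per point
            )

        def forward(self, points):             # points: (rays, samples, 3)
            out = self.mlp(points)
            rgb = torch.sigmoid(out[..., :3])   # colour in [0, 1]
            sigma = torch.relu(out[..., 3])     # non-negative volume density
            return rgb, sigma

    def render_rays(model, origins, directions, near=2.0, far=6.0, samples=64):
        """Volume-render each ray by sampling points and alpha-compositing."""
        t = torch.linspace(near, far, samples)                         # (samples,)
        points = origins[:, None, :] + t[None, :, None] * directions[:, None, :]
        rgb, sigma = model(points)                                      # per-sample outputs
        delta = (far - near) / samples                                  # constant step size
        alpha = 1.0 - torch.exp(-sigma * delta)                         # per-sample opacity
        trans = torch.cumprod(torch.cat(
            [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
            dim=-1)[:, :-1]                                             # accumulated transmittance
        weights = alpha * trans                                         # compositing weights
        return (weights[..., None] * rgb).sum(dim=1)                    # (rays, 3) pixel colours

    if __name__ == "__main__":
        model = TinyNeRF()
        origins = torch.zeros(4, 3)                                     # 4 toy rays from the origin
        directions = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
        print(render_rays(model, origins, directions).shape)            # torch.Size([4, 3])

Real NeRF variants additionally use positional encoding of the inputs, view-dependent colour, and hierarchical sampling; the sketch omits these for brevity.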
Author:
Liu, Kunhao, Zhan, Fangneng, Zhang, Jiahui, Xu, Muyu, Yu, Yingchen, Saddik, Abdulmotaleb El, Theobalt, Christian, Xing, Eric, Lu, Shijian
Open-vocabulary segmentation of 3D scenes is a fundamental function of human perception and thus a crucial objective in computer vision research. However, this task is heavily impeded by the lack of large-scale and diverse 3D open-vocabulary segmentation…
External link:
http://arxiv.org/abs/2305.14093
Audio-driven talking face generation, which aims to synthesize talking faces with realistic facial animations (including accurate lip movements, vivid facial expression details, and natural head poses) corresponding to the audio, has achieved rapid progress…
External link:
http://arxiv.org/abs/2304.08945
Facial expression editing has attracted increasing attention with the advance of deep neural networks in recent years. However, most existing methods suffer from compromised editing fidelity and limited usability as they either ignore pose variations…
External link:
http://arxiv.org/abs/2304.08938
Published in:
CVPR 2023
Generative Adversarial Networks (GANs) rely heavily on large-scale training data for training high-quality image generation models. With limited training data, the GAN discriminator often suffers from severe overfitting, which directly leads to degraded…
External link:
http://arxiv.org/abs/2303.17158
Author:
Liu, Kunhao, Zhan, Fangneng, Chen, Yiwen, Zhang, Jiahui, Yu, Yingchen, Saddik, Abdulmotaleb El, Lu, Shijian, Xing, Eric
3D style transfer aims to render stylized novel views of a 3D scene with multi-view consistency. However, most existing work suffers from a three-way dilemma over accurate geometry reconstruction, high-quality stylization, and being generalizable to…
External link:
http://arxiv.org/abs/2303.10598