Showing 1 - 10 of 26 for search: '"Xian, Wenqi"'
Author:
He, Mingming, Clausen, Pascal, Taşel, Ahmet Levent, Ma, Li, Pilarski, Oliver, Xian, Wenqi, Rikker, Laszlo, Yu, Xueming, Burgert, Ryan, Yu, Ning, Debevec, Paul
We present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation. Leveraging a subject-specific dataset containing diverse facial expressions captured under various lighting conditions…
External link:
http://arxiv.org/abs/2410.08188
Recent methods for 3D reconstruction and rendering increasingly benefit from end-to-end optimization of the entire image formation process. However, this approach is currently limited: effects of the optical hardware stack, and in particular lenses, are…
External link:
http://arxiv.org/abs/2304.04848
We propose "factor matting", an alternative formulation of the video matting problem in terms of counterfactual video synthesis that is better suited for re-composition tasks. The goal of factor matting is to separate the contents of video into independent…
External link:
http://arxiv.org/abs/2211.02145
Author:
Luo, Katie, Yang, Guandao, Xian, Wenqi, Haraldsson, Harald, Hariharan, Bharath, Belongie, Serge
Published in:
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 10050-10060
In applications such as optical see-through and projector augmented reality, producing images amounts to solving non-negative image generation, where one can only add light to an existing image. Most image generation methods, however, are ill-suited…
External link:
http://arxiv.org/abs/2202.00659
We present a method that learns a spatiotemporal neural irradiance field for dynamic scenes from a single video. Our learned representation enables free-viewpoint rendering of the input video. Our method builds upon recent advances in implicit representations…
External link:
http://arxiv.org/abs/2011.12950
Many popular tourist landmarks are captured in a multitude of online, public photos. These photos represent a sparse and unstructured sampling of the plenoptic function for a particular scene. In this paper, we present a new approach to novel view synthesis…
External link:
http://arxiv.org/abs/2007.15194
Author:
Xian, Wenqi, Li, Zhengqi, Fisher, Matthew, Eisenmann, Jonathan, Shechtman, Eli, Snavely, Noah
We introduce UprightNet, a learning-based approach for estimating 2DoF camera orientation from a single RGB image of an indoor scene. Unlike recent methods that leverage deep learning to perform black-box regression from image to orientation parameters…
External link:
http://arxiv.org/abs/1908.07070
Author:
Yu, Fisher, Chen, Haofeng, Wang, Xin, Xian, Wenqi, Chen, Yingying, Liu, Fangchen, Madhavan, Vashisht, Darrell, Trevor
Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on…
External link:
http://arxiv.org/abs/1805.04687
Author:
Xian, Wenqi, Sangkloy, Patsorn, Agrawal, Varun, Raj, Amit, Lu, Jingwan, Fang, Chen, Yu, Fisher, Hays, James
In this paper, we investigate deep image synthesis guided by sketch, color, and texture. Previous image synthesis methods can be controlled by sketch and color strokes but we are the first to examine texture control. We allow a user to place a texture…
External link:
http://arxiv.org/abs/1706.02823
Author:
He, Zonglin, Xian, Wenqi, Ding, Zhu, Wang, Chaozhi, Huang, Zhenhong, Song, Lina (Songlina@gdut.edu.cn), Liu, Baohua (baohua@gdut.edu.cn)
Published in:
Journal of Polymer Research, Nov 2022, Vol. 29, Issue 11, pp. 1-12.