Showing 1 - 10 of 763 for search: '"LI Zhengqi"'
Author:
Cai, Ruojin, Zhang, Jason Y., Henzler, Philipp, Li, Zhengqi, Snavely, Noah, Martin-Brualla, Ricardo
Pairwise pose estimation from images with little or no overlap is an open challenge in computer vision. Existing methods, even those trained on large-scale datasets, struggle in these scenarios due to the lack of identifiable correspondences or visual…
External link:
http://arxiv.org/abs/2412.16155
Learning to understand dynamic 3D scenes from imagery is crucial for applications ranging from robotics to scene reconstruction. Yet, unlike other problems where large-scale supervised training has enabled rapid progress, directly supervising methods…
External link:
http://arxiv.org/abs/2412.09621
Author:
Li, Zhengqi, Tucker, Richard, Cole, Forrester, Wang, Qianqian, Jin, Linyi, Ye, Vickie, Kanazawa, Angjoo, Holynski, Aleksander, Snavely, Noah
We present a system that allows for accurate, fast, and robust estimation of camera parameters and depth maps from casual monocular videos of dynamic scenes. Most conventional structure from motion and monocular SLAM techniques assume input videos that…
External link:
http://arxiv.org/abs/2412.04463
Monocular dynamic reconstruction is a challenging and long-standing vision problem due to the highly ill-posed nature of the task. Existing approaches are limited in that they either depend on templates, are effective only in quasi-static scenes, or…
External link:
http://arxiv.org/abs/2407.13764
Author:
Deng, Boyang, Tucker, Richard, Li, Zhengqi, Guibas, Leonidas, Snavely, Noah, Wetzstein, Gordon
We present a method for generating Streetscapes: long sequences of views through an on-the-fly synthesized city-scale scene. Our generation is conditioned by language input (e.g., city name, weather), as well as an underlying map/layout hosting the desired…
External link:
http://arxiv.org/abs/2407.13759
We present an approach to modeling an image-space prior on scene motion. Our prior is learned from a collection of motion trajectories extracted from real video sequences depicting natural, oscillatory dynamics such as trees, flowers, candles, and clothes…
External link:
http://arxiv.org/abs/2309.07906
Author:
Wang, Qianqian, Chang, Yen-Yu, Cai, Ruojin, Li, Zhengqi, Hariharan, Bharath, Holynski, Aleksander, Snavely, Noah
We present a new test-time optimization method for estimating dense and long-range motion from a video sequence. Prior optical flow or particle video tracking algorithms typically operate within limited temporal windows, struggling to track through occlusions…
External link:
http://arxiv.org/abs/2306.05422
Despite increasingly realistic image quality, recent 3D image generative models often operate on 3D volumes of fixed extent with limited camera motions. We investigate the task of unconditionally synthesizing unbounded nature scenes, enabling arbitrarily…
External link:
http://arxiv.org/abs/2303.13515
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene. State-of-the-art methods based on temporally varying Neural Radiance Fields (aka dynamic NeRFs) have shown impressive results on this task. However…
External link:
http://arxiv.org/abs/2211.11082