Showing 1 - 10
of 202
for search: '"Lei, Jiahui"'
We introduce 4D Motion Scaffolds (MoSca), a neural information processing system designed to reconstruct and synthesize novel views of dynamic scenes from monocular videos captured casually in the wild. To address such a challenging and ill-posed inv…
External link:
http://arxiv.org/abs/2405.17421
We propose a novel test-time optimization approach for efficiently and robustly tracking any pixel at any time in a video. The latest state-of-the-art optimization-based tracking technique, OmniMotion, requires a prohibitively long optimization time…
External link:
http://arxiv.org/abs/2403.17931
Accurately and efficiently modeling dynamic scenes and motions is a challenging task due to temporal dynamics and motion complexity. To address these challenges, we propose DynMF, a compact and efficient representation that decomposes a…
External link:
http://arxiv.org/abs/2312.00112
We propose Neural 3D Articulation Prior (NAP), the first 3D deep generative model to synthesize 3D articulated object models. Despite the extensive research on generating 3D objects, compositions, or scenes, there remains a lack of focus on capturing…
External link:
http://arxiv.org/abs/2305.16315
Equivariance has gained strong interest as a desirable network property that inherently ensures robust generalization. However, when dealing with complex systems such as articulated objects or multi-object scenes, effectively capturing inter-part tra…
External link:
http://arxiv.org/abs/2305.16314
We introduce Equivariant Neural Field Expectation Maximization (EFEM), a simple, effective, and robust geometric algorithm that can segment objects in 3D scenes without annotations or training on scenes. We achieve such unsupervised segmentation by e…
External link:
http://arxiv.org/abs/2303.15440
3D reconstruction and novel view rendering can greatly benefit from geometric priors when the input views are not sufficient in terms of coverage and inter-view baselines. Deep learning of geometric priors from 2D images often requires each image to…
External link:
http://arxiv.org/abs/2212.14871
We introduce a unified framework for group equivariant networks on homogeneous spaces derived from a Fourier perspective. We consider tensor-valued feature fields, before and after a convolutional layer. We present a unified derivation of kernels via…
External link:
http://arxiv.org/abs/2206.08362
Author:
Lei, Jiahui, Daniilidis, Kostas
While neural representations for static 3D shapes are widely studied, representations for deformable surfaces are limited to being template-dependent or lack efficiency. We introduce Canonical Deformation Coordinate Space (CaDeX), a unified representati…
External link:
http://arxiv.org/abs/2203.16529
Published in:
In Chemical Engineering Journal, 1 November 2024, 499