Showing 1 - 10 of 131 for search: '"Birchfield, Stan"'
Author:
Tang, Zhenggang, Ren, Zhongzheng, Zhao, Xiaoming, Wen, Bowen, Tremblay, Jonathan, Birchfield, Stan, Schwing, Alexander
We present a method for automatically modifying a NeRF representation based on a single observation of a non-rigid transformed version of the original scene. Our method defines the transformation as a 3D flow, specifically as a weighted linear blending …
External link:
http://arxiv.org/abs/2406.10543
Author:
Qin, Zhen, Shen, Xuyang, Li, Dong, Sun, Weigao, Birchfield, Stan, Hartley, Richard, Zhong, Yiran
We present the Linear Complexity Sequence Model (LCSM), a comprehensive solution that unites various sequence modeling techniques with linear complexity, including linear attention, state space model, long convolution, and linear RNN, within a single …
External link:
http://arxiv.org/abs/2405.17383
Author:
Weng, Yijia, Wen, Bowen, Tremblay, Jonathan, Blukis, Valts, Fox, Dieter, Guibas, Leonidas, Birchfield, Stan
We address the problem of building digital twins of unknown articulated objects from two RGBD scans of the object at different articulation states. We decompose the problem into two stages, each addressing distinct aspects. Our method first reconstructs …
External link:
http://arxiv.org/abs/2404.01440
We present FoundationPose, a unified foundation model for 6D object pose estimation and tracking, supporting both model-based and model-free setups. Our approach can be instantly applied at test-time to a novel object without fine-tuning, as long as …
External link:
http://arxiv.org/abs/2312.08344
Author:
Tremblay, Jonathan, Wen, Bowen, Blukis, Valts, Sundaralingam, Balakumar, Tyree, Stephen, Birchfield, Stan
We introduce Diff-DOPE, a 6-DoF pose refiner that takes as input an image, a 3D textured model of an object, and an initial pose of the object. The method uses differentiable rendering to update the object pose to minimize the visual error between the …
External link:
http://arxiv.org/abs/2310.00463
Author:
Guo, Andrew, Wen, Bowen, Yuan, Jianhe, Tremblay, Jonathan, Tyree, Stephen, Smith, Jeffrey, Birchfield, Stan
We present the HANDAL dataset for category-level object pose estimation and affordance prediction. Unlike previous datasets, ours is focused on robotics-ready manipulable objects that are of the proper size and shape for functional grasping by robot …
External link:
http://arxiv.org/abs/2308.01477
Author:
Sun, Fan-Yun, Tremblay, Jonathan, Blukis, Valts, Lin, Kevin, Xu, Danfei, Ivanovic, Boris, Karkus, Peter, Birchfield, Stan, Fox, Dieter, Zhang, Ruohan, Li, Yunzhu, Wu, Jiajun, Pavone, Marco, Haber, Nick
We propose Filtering Inversion (FINV), a learning framework and optimization process that predicts a renderable 3D object representation from one or few partial views. FINV addresses the challenge of synthesizing novel views of objects from partial observations …
External link:
http://arxiv.org/abs/2304.00673
Author:
Lee, Taeyeop, Tremblay, Jonathan, Blukis, Valts, Wen, Bowen, Lee, Byeong-Uk, Shin, Inkyu, Birchfield, Stan, Kweon, In So, Yoon, Kuk-Jin
Test-time adaptation methods have been gaining attention recently as a practical solution for addressing source-to-target domain gaps by gradually updating the model without requiring labels on the target data. In this paper, we propose a method of test-time …
External link:
http://arxiv.org/abs/2303.16730
Author:
Wen, Bowen, Tremblay, Jonathan, Blukis, Valts, Tyree, Stephen, Muller, Thomas, Evans, Alex, Fox, Dieter, Kautz, Jan, Birchfield, Stan
We present a near real-time method for 6-DoF tracking of an unknown object from a monocular RGBD video sequence, while simultaneously performing neural 3D reconstruction of the object. Our method works for arbitrary rigid objects, even when visual texture …
External link:
http://arxiv.org/abs/2303.14158
Author:
Ye, Yufei, Li, Xueting, Gupta, Abhinav, De Mello, Shalini, Birchfield, Stan, Song, Jiaming, Tulsiani, Shubham, Liu, Sifei
Recent successes in image synthesis are powered by large-scale diffusion models. However, most methods are currently limited to either text- or image-conditioned generation for synthesizing an entire image, texture transfer, or inserting objects into …
External link:
http://arxiv.org/abs/2303.12538