Showing 1 - 10 of 35 for search: '"Zhang, Ruida"'
Author:
Zhang, Ruida, Huang, Ziqin, Wang, Gu, Zhang, Chenyangguang, Di, Yan, Zuo, Xingxing, Tang, Jiwen, Ji, Xiangyang
While RGBD-based methods for category-level object pose estimation hold promise, their reliance on depth data limits their applicability in diverse scenarios. In response, recent efforts have turned to RGB-based methods; however, they face significant …
External link:
http://arxiv.org/abs/2409.15727
In robotic vision, a de-facto paradigm is to learn in simulated environments and then transfer to real-world applications, which poses an essential challenge in bridging the sim-to-real domain gap. While mainstream works tackle this problem in the RGB …
External link:
http://arxiv.org/abs/2404.03962
Author:
Zhang, Ruida, Zhang, Chenyangguang, Di, Yan, Manhardt, Fabian, Liu, Xingyu, Tombari, Federico, Ji, Xiangyang
In this paper, we present KP-RED, a unified KeyPoint-driven REtrieval and Deformation framework that takes object scans as input and jointly retrieves and deforms the most geometrically similar CAD models from a pre-processed database to tightly match …
External link:
http://arxiv.org/abs/2403.10099
Author:
Chen, Yamei, Di, Yan, Zhai, Guangyao, Manhardt, Fabian, Zhang, Chenyangguang, Zhang, Ruida, Tombari, Federico, Navab, Nassir, Busam, Benjamin
Category-level object pose estimation, aiming to predict the 6D pose and 3D size of objects from known categories, typically struggles with large intra-class shape variation. Existing works utilizing mean shapes often fall short of capturing this variation …
External link:
http://arxiv.org/abs/2311.11125
Author:
Di, Yan, Zhang, Chenyangguang, Wang, Chaowei, Zhang, Ruida, Zhai, Guangyao, Li, Yanyan, Fu, Bowen, Ji, Xiangyang, Gao, Shan
In this paper, we present ShapeMatcher, a unified self-supervised learning framework for joint shape canonicalization, segmentation, retrieval and deformation. Given a partially-observed object in an arbitrary pose, we first canonicalize the object by …
External link:
http://arxiv.org/abs/2311.11106
Author:
Zhang, Chenyangguang, Jiao, Guanlong, Di, Yan, Wang, Gu, Huang, Ziqin, Zhang, Ruida, Manhardt, Fabian, Fu, Bowen, Tombari, Federico, Ji, Xiangyang
Previous works concerning single-view hand-held object reconstruction typically rely on supervision from 3D ground-truth models, which are hard to collect in the real world. In contrast, readily accessible hand-object videos offer a promising training data …
External link:
http://arxiv.org/abs/2310.11696
Author:
Zhang, Chenyangguang, Di, Yan, Zhang, Ruida, Zhai, Guangyao, Manhardt, Fabian, Tombari, Federico, Ji, Xiangyang
Reconstructing hand-held objects from a single RGB image is an important and challenging problem. Existing works utilizing Signed Distance Fields (SDF) reveal limitations in comprehensively capturing the complex hand-object interactions, since SDF is …
External link:
http://arxiv.org/abs/2308.08231
Author:
Di, Yan, Zhang, Chenyangguang, Wang, Pengyuan, Zhai, Guangyao, Zhang, Ruida, Manhardt, Fabian, Busam, Benjamin, Ji, Xiangyang, Tombari, Federico
In this paper, we present a novel shape reconstruction method leveraging a diffusion model to generate a 3D sparse point cloud for the object captured in a single RGB image. Recent methods typically leverage global embedding or local projection-based features …
External link:
http://arxiv.org/abs/2308.07837
Author:
Di, Yan, Zhang, Chenyangguang, Zhang, Ruida, Manhardt, Fabian, Su, Yongzhi, Rambach, Jason, Stricker, Didier, Ji, Xiangyang, Tombari, Federico
In this paper, we propose U-RED, an Unsupervised shape REtrieval and Deformation pipeline that takes an arbitrary object observation as input, typically captured by RGB images or scans, and jointly retrieves and deforms the geometrically similar CAD models …
External link:
http://arxiv.org/abs/2308.06383
Category-level pose estimation is a challenging problem due to intra-class shape variations. Recent methods deform pre-computed shape priors to map the observed point cloud into the normalized object coordinate space and then retrieve the pose via po…
External link:
http://arxiv.org/abs/2208.06661