Author:
Bao, Yongtang; Su, Chunjian; Qi, Yutong; Geng, Yanbing; Li, Haojie
Source:
ACM Transactions on Multimedia Computing, Communications & Applications; Dec 2024, Vol. 20, Issue 12, p1-20, 20p
Abstract:
Category-level pose estimation predicts the 6D pose of objects within a given category and has wide applications in fields such as robotics, virtual reality, and autonomous driving. With the development of VR/AR technology, pose estimation has gradually become a research hotspot in 3D scene understanding. However, most existing methods fail to fully exploit geometric and color information to handle intra-class shape variations, which leads to inaccurate predictions. To address these problems, we propose a novel pose estimation and iterative refinement network: an attention mechanism fuses multi-modal information to obtain color features after a coordinate transformation, and iterative modules preserve the accuracy of the object's geometric features. Specifically, we use an encoder-decoder architecture to implicitly generate a coarse-grained initial pose and refine it through an iterative refinement module. In addition, because rotation and position estimation differ in nature, we design a multi-head pose decoder that exploits both local geometry and global features. Finally, we design a transformer-based coordinate transformation attention module that extracts pose-sensitive features from RGB images and supervises the color information by correlating point cloud features across different coordinate systems. We train and test our network on the synthetic dataset CAMERA25 and the real dataset REAL275. Experimental results show that our method achieves state-of-the-art performance on multiple evaluation metrics. [ABSTRACT FROM AUTHOR]
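The abstract above names two mechanisms: attention-based fusion of color and geometric features, and iterative refinement of a coarse initial pose. The following is a minimal, hypothetical PyTorch sketch of those two ideas, not the authors' architecture; all module names, feature sizes, and the quaternion-based residual update are assumptions for illustration only.

```python
# Hypothetical sketch (not the paper's network): cross-attention fusion of per-point
# color and geometry features, followed by iterative residual pose refinement.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Fuse per-point geometric features with RGB features via cross-attention."""

    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, geo_feat: torch.Tensor, rgb_feat: torch.Tensor) -> torch.Tensor:
        # geo_feat, rgb_feat: (B, N, dim); geometry queries attend to color keys/values.
        fused, _ = self.attn(geo_feat, rgb_feat, rgb_feat)
        return self.norm(geo_feat + fused)


class PoseHead(nn.Module):
    """Predict a rotation (unit quaternion) and a translation from pooled features."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.rot = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 4))
        self.trans = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 3))

    def forward(self, feat: torch.Tensor):
        pooled = feat.mean(dim=1)  # (B, dim) global feature
        quat = nn.functional.normalize(self.rot(pooled), dim=-1)
        return quat, self.trans(pooled)


def quat_mul(q: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Hamilton product of two batches of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q.unbind(-1)
    w2, x2, y2, z2 = r.unbind(-1)
    return torch.stack([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ], dim=-1)


def estimate_pose(geo_feat, rgb_feat, fusion, coarse_head, refine_head, iters=2):
    """Coarse pose from fused features, then iterative residual refinement.

    In a full network the refinement step would recompute features from the
    transformed point cloud; here the features are reused to keep the sketch short.
    """
    fused = fusion(geo_feat, rgb_feat)
    quat, trans = coarse_head(fused)          # coarse-grained initial pose
    for _ in range(iters):
        d_quat, d_trans = refine_head(fused)  # residual rotation/translation update
        quat = nn.functional.normalize(quat_mul(d_quat, quat), dim=-1)
        trans = trans + d_trans
    return quat, trans


if __name__ == "__main__":
    B, N, D = 2, 1024, 128
    fusion, coarse, refine = CrossModalFusion(D), PoseHead(D), PoseHead(D)
    q, t = estimate_pose(torch.randn(B, N, D), torch.randn(B, N, D),
                         fusion, coarse, refine)
    print(q.shape, t.shape)  # torch.Size([2, 4]) torch.Size([2, 3])
```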
Database:
Complementary Index |