Single-Stage 6D Object Pose Estimation
Author: Pascal Fua, Wei Wang, Yinlin Hu, Mathieu Salzmann
Year of publication: 2020
Subject: FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); 6D object pose estimation; feature extraction; pattern recognition; object detection; RANSAC; pose; artificial intelligence; image processing
Source: CVPR
DOI: 10.1109/cvpr42600.2020.00300
Description: Most recent 6D pose estimation frameworks first rely on a deep network to establish correspondences between 3D object keypoints and 2D image locations, and then use a variant of a RANSAC-based Perspective-n-Point (PnP) algorithm to recover the pose. This two-stage process, however, is suboptimal: first, it is not end-to-end trainable; second, training the deep network relies on a surrogate loss that does not directly reflect the final 6D pose estimation task. In this work, we introduce a deep architecture that directly regresses 6D poses from correspondences. It takes as input a group of candidate correspondences for each 3D keypoint and accounts for the fact that the order of the correspondences within each group is irrelevant, while the order of the groups, that is, of the 3D keypoints, is fixed. Our architecture is generic and can thus be exploited in conjunction with existing correspondence-extraction networks so as to yield single-stage 6D pose estimation frameworks. Our experiments demonstrate that these single-stage frameworks consistently outperform their two-stage counterparts in terms of both accuracy and speed. (CVPR 2020; a minimal, hypothetical sketch of the group-wise aggregation idea described here follows the record below.)
Database: OpenAIRE
External link:
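The abstract's key architectural idea is that candidate 2D-3D correspondences arrive in per-keypoint groups, that the network should be invariant to the order of candidates inside each group, and that the groups themselves keep a fixed keypoint order before the pose is regressed. The snippet below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' released implementation; the input encoding (x, y, score per candidate), layer sizes, max-pooling choice, and the translation + axis-angle output parameterization are all illustrative assumptions.

```python
# Hypothetical sketch of a group-wise, permutation-invariant pose regressor.
# Assumptions (not from the paper's code): each candidate correspondence is
# encoded as (x, y, score); features are max-pooled within each keypoint group;
# the pose is regressed as 3D translation + axis-angle rotation.
import torch
import torch.nn as nn


class GroupedCorrespondencePoseNet(nn.Module):
    def __init__(self, num_keypoints=8, feat_dim=128):
        super().__init__()
        # Shared encoder applied independently to every candidate correspondence.
        self.point_encoder = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Pose head consumes the fixed-order concatenation of per-group features.
        self.pose_head = nn.Sequential(
            nn.Linear(num_keypoints * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 6),  # 3 translation + 3 axis-angle rotation components
        )

    def forward(self, correspondences):
        # correspondences: (B, K, N, 3) = batch, keypoint groups, candidates, features
        B, K, N, D = correspondences.shape
        feats = self.point_encoder(correspondences)        # (B, K, N, feat_dim)
        # Max-pool over candidates: invariant to the order within each group.
        group_feats, _ = feats.max(dim=2)                  # (B, K, feat_dim)
        # Flatten groups in their fixed keypoint order and regress the pose.
        return self.pose_head(group_feats.reshape(B, -1))  # (B, 6)


if __name__ == "__main__":
    net = GroupedCorrespondencePoseNet()
    x = torch.randn(2, 8, 16, 3)   # 2 images, 8 keypoints, 16 candidates each
    print(net(x).shape)            # torch.Size([2, 6])
```

Because the pooled feature is identical under any permutation of the candidates in a group, such a head can be attached to an existing correspondence-extraction network and trained end-to-end with a pose-level loss, which is the single-stage property the abstract contrasts with RANSAC-PnP pipelines.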