Unsupervised monocular visual odometry via combining instance and RGB information
Author: Min Yue, Guangyuan Fu, Ming Wu, Hongyang Gu, Erliang Yao
Year of publication: 2022
Subject:
Source: Applied Optics. 61:3793
ISSN: 2155-3165, 1559-128X
Description: Unsupervised deep learning methods have made significant progress in monocular visual odometry (VO). However, due to the complexity of real-world scenes, learning camera ego-motion from the RGB information of monocular images in an unsupervised way remains challenging. Existing methods mainly learn motion from raw RGB information and lack higher-level input from scene understanding. Hence, this paper proposes an unsupervised monocular VO framework that combines instance and RGB information, named combined information based VO (CI-VO). The proposed method has two stages. The first stage obtains instance maps of the monocular images without fine-tuning on the VO dataset. The second stage combines the two types of information and feeds the result into the proposed combined information based pose estimation network, named CI-PoseNet, to estimate the relative camera pose. To make better use of the two types of information, we propose a fusion feature extraction network that extracts fused features from the combined information. Experiments on the KITTI odometry and KITTI raw datasets show that the proposed method performs well on camera pose estimation, outperforming existing mainstream methods.
Database: OpenAIRE
External link:
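The abstract describes forming a "combined information" input from RGB frames and instance maps before pose regression. The paper's exact fusion scheme is not given here, so the following is only a minimal sketch of one plausible reading: channel-wise concatenation of two consecutive RGB frames with their normalized instance-ID maps. The function name `combined_input` and the normalization choice are assumptions, not the authors' specification.

```python
import numpy as np

def combined_input(rgb_t, rgb_t1, inst_t, inst_t1):
    """Stack two consecutive RGB frames and their instance maps channel-wise.

    rgb_t, rgb_t1 : (H, W, 3) float arrays in [0, 1], frames at t and t+1.
    inst_t, inst_t1 : (H, W) integer instance-ID maps (0 = background),
        e.g. from an off-the-shelf instance segmentation model.
    Returns an (H, W, 8) array: 3 + 3 RGB channels plus 2 instance channels,
    which a pose network (CI-PoseNet in the paper) could take as input.
    """
    def norm_inst(m):
        # Scale instance IDs into [0, 1] so they are comparable to RGB values.
        m = m.astype(np.float32)
        return m / m.max() if m.max() > 0 else m

    channels = [
        rgb_t.astype(np.float32),
        rgb_t1.astype(np.float32),
        norm_inst(inst_t)[..., None],   # add a trailing channel axis
        norm_inst(inst_t1)[..., None],
    ]
    return np.concatenate(channels, axis=-1)
```

A channel-wise stack is the simplest way to hand both modalities to a single convolutional encoder; the paper's fusion feature extraction network presumably processes the two modalities with more structure than this.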