Feature-Based and Convolutional Neural Network Fusion Method for Visual Relocalization
Authors: Lijun Zhao, Ruifeng Li, Chee Kwang Quah, Hock Soon Seah, Li Wang, Jingwen Sun
Year of publication: 2018
Subject: Fusion; Computer science; Motion blur; Mobile robot; Pattern recognition; RANSAC; Convolutional neural network; Bag-of-words model in computer vision; Robustness (computer science); Feature-based methods; Artificial intelligence; Image processing
Source: ICARCV
DOI: 10.1109/icarcv.2018.8581204
Description: Relocalization is one of the modules required for long-term autonomous operation of mobile robots in an environment. Current visual relocalization algorithms fall mainly into feature-based methods and CNN-based (Convolutional Neural Network) methods. Feature-based methods achieve high localization accuracy in feature-rich scenes, but their error grows large, or they fail outright, under motion blur, in texture-less scenes, and under changing viewing angles. CNN-based methods are usually more robust but less accurate. For this reason, this paper proposes a visual relocalization algorithm that combines the advantages of both. The BoVW (Bag of Visual Words) model is used to retrieve the most similar image in the training dataset, and PnP (Perspective-n-Point) with RANSAC (Random Sample Consensus) is employed to estimate an initial pose. The number of RANSAC inliers is then used as the criterion for deciding whether the feature-based pose or the CNN-based pose is adopted (see the sketch after this record). Compared with PoseNet, a previous CNN-based method, the average position error is reduced by 45.6% and the average orientation error by 67.4% on Microsoft's 7-Scenes datasets, which verifies the effectiveness of the proposed algorithm.
Database: OpenAIRE
External link:
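
The decision rule summarized in the description (keep the feature-based pose when PnP + RANSAC yields enough inliers, otherwise fall back to the CNN-regressed pose) can be sketched roughly as below. This is a minimal illustration assuming an OpenCV pipeline; the function name `select_pose`, the `inlier_threshold` value, and the `cnn_pose` fallback are hypothetical placeholders, not details taken from the paper.

```python
# Hedged sketch: choose between a feature-based pose (PnP + RANSAC on the
# 2D-3D correspondences from the BoVW-retrieved frame) and a CNN-regressed
# pose (PoseNet-style), based on the RANSAC inlier count. The helper
# signature and the threshold value are assumptions for illustration.
import numpy as np
import cv2


def select_pose(pts_3d, pts_2d, camera_matrix, cnn_pose, inlier_threshold=30):
    """Return (rvec, tvec): the feature-based estimate if PnP + RANSAC finds
    enough inliers, otherwise the CNN-predicted pose."""
    pts_3d = np.asarray(pts_3d, dtype=np.float32)
    pts_2d = np.asarray(pts_2d, dtype=np.float32)

    if len(pts_2d) >= 4:  # solvePnPRansac needs at least 4 correspondences
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts_3d, pts_2d, camera_matrix, distCoeffs=None)
        if ok and inliers is not None and len(inliers) >= inlier_threshold:
            return rvec, tvec  # trust the feature-based estimate

    # Too few inliers (e.g. motion blur or a texture-less scene): use the CNN pose.
    return cnn_pose
```

The threshold on the inlier count acts as a simple reliability test: feature matching is trusted only when geometric verification supports it, which is the fusion idea the paper describes.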