Omnidirectional CNN for Visual Place Recognition and Navigation
Authors: Hung-Jui Huang, Kuo-Hao Zeng, Chan-Wei Hu, Min Sun, Tsun-Hsuan Wang, Juan-Ting Lin
Year of publication: 2018
Subject: computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); convolutional neural network; feature extraction; visualization; computer vision; artificial intelligence; robotics; omnidirectional imaging; rotation (mathematics)
Source: ICRA
DOI: 10.1109/icra.2018.8463173
Description: Visual place recognition is challenging, especially when only a few place exemplars are given. To mitigate this challenge, we consider a place recognition method using omnidirectional cameras and propose a novel Omnidirectional Convolutional Neural Network (O-CNN) to handle severe camera pose variation. Given a visual input, the task of the O-CNN is not to retrieve an exactly matched place exemplar, but to retrieve the closest place exemplar and estimate the relative distance between the input and that closest place. With this ability to estimate relative distance, a heuristic policy is proposed to navigate a robot to the retrieved closest place. Note that the network is designed to take advantage of the omnidirectional view by incorporating circular padding and rotation invariance. To train a powerful O-CNN, we build a virtual world for large-scale training. We also propose a continuous lifted structured feature embedding loss to learn the concept of distance efficiently. Finally, our experimental results confirm that our method achieves state-of-the-art accuracy and speed on both virtual-world and real-world datasets. Comment: 8 pages, 6 figures. Accepted to the 2018 IEEE International Conference on Robotics and Automation (ICRA 2018).
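The description mentions that the network exploits the omnidirectional view via circular padding: because the left and right borders of a 360-degree panorama are physically adjacent, padding along the width axis should wrap around rather than insert zeros. A minimal NumPy sketch of this idea, assuming a 2-D feature map and a hypothetical helper name (the paper's actual layer implementation is not given here):

```python
import numpy as np

def circular_pad_width(feat, pad):
    """Pad a feature map along its width axis by wrapping columns around,
    matching the horizontal continuity of an omnidirectional image.
    feat: array of shape (H, W); pad: number of columns added on each side."""
    left = feat[:, -pad:]   # columns wrapped in from the right edge
    right = feat[:, :pad]   # columns wrapped in from the left edge
    return np.concatenate([left, feat, right], axis=1)

# A convolution applied after this padding sees no artificial seam at the
# panorama borders, so features become consistent under horizontal rotation.
x = np.arange(12).reshape(3, 4)
y = circular_pad_width(x, 1)   # shape (3, 6); y[:, 0] == x[:, -1]
```

In deep-learning frameworks the same effect is typically available directly, e.g. `padding_mode='circular'` in PyTorch convolutions, so a custom helper like this is only needed when working outside such a framework.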
Database: OpenAIRE
External link: