Appearance-invariant place recognition by discriminatively training a convolutional neural network
Authors: Manuel Lopez-Antequera, Nicolai Petkov, Javier Gonzalez-Jimenez, Ruben Gomez-Ojeda
Contributors: Intelligent Systems
Language: English
Year of publication: 2017
Subjects: Place recognition; Convolutional neural networks; Computer vision; Pattern recognition; Robotics; Artificial intelligence; Signal processing; Embedding; Software
Source: Pattern Recognition Letters, 92, 89-95. Elsevier Science BV
ISSN: 0167-8655
Description: A convolutional neural network embedding to perform place recognition is introduced. A triplet similarity loss is chosen to allow for weakly supervised training. The network is trained with triplets of images presenting seasonal or other changes. The method is tested against state-of-the-art solutions on challenging datasets. Visual place recognition is the task of automatically recognizing a previously visited location through its appearance, and it plays a key role in mobile robotics and autonomous driving applications. The difficulty of recognizing a revisited location increases with appearance variations caused by weather, illumination, or viewpoint changes. In this paper we present a convolutional neural network (CNN) embedding that performs place recognition even under severe appearance changes. The network maps images to a low-dimensional space where images from nearby locations map to points close to each other, despite differences in visual appearance caused by the aforementioned phenomena. In order for the network to learn the desired invariances, we train it with triplets of images selected from datasets that present challenging variability in visual appearance. Our proposal is validated through extensive experimentation that reveals better performance than state-of-the-art methods. Importantly, although the training phase is computationally demanding, its online application is very efficient. An illustrative sketch of the triplet-loss training scheme is given after this record.
Database: OpenAIRE
External link:
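
The description above explains that a CNN embedding is trained with a triplet similarity loss so that images of the same place, seen under different conditions, map to nearby points in a low-dimensional space. The sketch below illustrates that training scheme and the nearest-neighbour lookup that makes online use efficient. It is a minimal, hypothetical example in PyTorch: the backbone architecture, embedding dimensionality, margin, and optimizer settings are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): training a CNN embedding
# with a triplet margin loss. All architecture and hyperparameter choices
# below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Maps an image to a point in a low-dimensional embedding space."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        x = self.features(x).flatten(1)
        # L2-normalize so distances between embeddings are directly comparable.
        return F.normalize(self.fc(x), dim=1)

net = EmbeddingNet()
criterion = nn.TripletMarginLoss(margin=0.5)  # margin value is illustrative
optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)

# One training step on a dummy triplet: anchor and positive show the same
# place under different appearance conditions, negative shows another place.
anchor = torch.randn(8, 3, 128, 128)
positive = torch.randn(8, 3, 128, 128)
negative = torch.randn(8, 3, 128, 128)

loss = criterion(net(anchor), net(positive), net(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At query time, place recognition reduces to a nearest-neighbour search
# over stored embeddings of previously visited places.
with torch.no_grad():
    map_embeddings = net(torch.randn(100, 3, 128, 128))  # embeddings of mapped places
    query_embedding = net(torch.randn(1, 3, 128, 128))
    match = torch.cdist(query_embedding, map_embeddings).argmin(dim=1)
```

Training with such triplets only requires knowing which images depict the same place, not pixel-level correspondences, which is what makes the scheme weakly supervised; once trained, only a single forward pass and a distance comparison are needed per query, consistent with the efficient online application noted in the description.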