A Hybrid Compact Neural Architecture for Visual Place Recognition

Author: Luis Hernandez-Nunez, Andrew B. Barron, Ajay Narendra, Michael Milford, Marvin Chancán
Year of publication: 2020
Subject:
FOS: Computer and information sciences
Computer Science - Machine Learning
0209 industrial biotechnology
Control and Optimization
Computer science
Computer Vision and Pattern Recognition (cs.CV)
Computer Science - Computer Vision and Pattern Recognition
Biomedical Engineering
02 engineering and technology
Spatial memory
Machine Learning (cs.LG)
Computer Science - Robotics
020901 industrial engineering & automation
Artificial Intelligence
0202 electrical engineering, electronic engineering, information engineering
Image retrieval
Artificial neural network
Mechanical Engineering
Deep learning
Pattern recognition
Computer Science Applications
Human-Computer Interaction
Control and Systems Engineering
Benchmark (computing)
Key (cryptography)
020201 artificial intelligence & image processing
Computer Vision and Pattern Recognition
Artificial intelligence
Robotics (cs.RO)
Source: IEEE Robotics and Automation Letters. 5:993-1000
ISSN: 2377-3774
DOI: 10.1109/lra.2020.2967324
Description: State-of-the-art algorithms for visual place recognition, and related visual navigation systems, can be broadly split into two categories: computer-science-oriented models, including deep learning and image-retrieval-based techniques with minimal biological plausibility, and neuroscience-oriented dynamical networks that model the temporal properties underlying spatial navigation in the brain. In this letter, we propose a new compact, high-performing place recognition model that bridges this divide for the first time. Our approach comprises two key neural models, one from each category: (1) FlyNet, a compact, sparse two-layer neural network inspired by the brain architecture of the fruit fly, Drosophila melanogaster, and (2) a one-dimensional continuous attractor neural network (CANN). The resulting FlyNet+CANN network combines the compact pattern recognition capabilities of our FlyNet model with the powerful temporal filtering capabilities of an equally compact CANN, replicating entirely within a hybrid neural implementation the functionality that yields high performance in algorithmic localization approaches such as SeqSLAM. We evaluate our model, and compare it to three state-of-the-art methods, on two benchmark real-world datasets with small viewpoint variations and extreme environmental changes, achieving 87% AUC under day-to-night transitions compared to 60% for Multi-Process Fusion, 46% for LoST-X, and 1% for SeqSLAM, while being 6.5, 310, and 1.5 times faster, respectively.
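The description above is high-level; the Python/NumPy sketch below illustrates how such a pipeline could fit together. Everything in it is an assumption made for illustration: the layer sizes, the binary random connectivity, the 50% winner-take-all ratio, the ring-kernel parameters, and the leaky-integrator dynamics are illustrative stand-ins, not the parameters published in the letter.

import numpy as np

def build_projection(n_hidden, n_input, conn_ratio=0.1, seed=0):
    # Each hidden unit connects to a small random subset of input pixels
    # with binary weights; conn_ratio is an assumed sparsity level.
    rng = np.random.default_rng(seed)
    return (rng.random((n_hidden, n_input)) < conn_ratio).astype(float)

def flynet_encode(x, W, wta_ratio=0.5):
    # FlyNet-style two-layer encoding: sparse random projection followed
    # by winner-take-all, producing a compact binary place code.
    a = W @ x
    k = int(wta_ratio * a.size)
    code = np.zeros_like(a)
    code[np.argsort(a)[-k:]] = 1.0
    return code

def ring_weights(n, sigma=2.0, inhibition=0.05):
    # 1D ring connectivity: local excitation (Gaussian kernel on the
    # wrap-around distance) plus uniform global inhibition.
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, n - d)
    return np.exp(-d**2 / (2.0 * sigma**2)) - inhibition

def cann_filter(similarities, W_ring, dt=0.5):
    # Leaky-integrator CANN: each frame's similarity vector is injected
    # as external input; the activity bump smooths matches over time.
    u = np.zeros(similarities.shape[1])
    matches = []
    for s in similarities:
        r = np.tanh(np.maximum(u, 0.0))   # bounded recurrent rates
        u += dt * (-u + W_ring @ r + s)
        matches.append(int(np.argmax(u)))
    return matches

# Toy usage: match a noisy re-traversal against 100 reference "images"
# (random vectors standing in for preprocessed frames).
rng = np.random.default_rng(1)
refs = rng.random((100, 1024))
queries = refs + 0.3 * rng.random((100, 1024))
W = build_projection(n_hidden=512, n_input=1024)
ref_codes = np.array([flynet_encode(r, W) for r in refs])
sims = np.array([ref_codes @ flynet_encode(q, W) for q in queries])
sims /= sims.max()                        # normalize the input drive
print(cann_filter(sims, ring_weights(100))[:10])   # roughly 0, 1, 2, ...

The hybrid's division of labor is visible in the last lines: the FlyNet-style encoder compresses each frame into a sparse binary code whose dot products give single-frame match scores, and the CANN's activity bump filters those scores across consecutive frames, in the spirit of the sequence filtering that SeqSLAM performs algorithmically.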
Preprint version of the article published in IEEE Robotics and Automation Letters.
Database: OpenAIRE