ViT VO - A Visual Odometry technique Using CNN-Transformer Hybrid Architecture

Author: B Jayaraj P., J Ebin, R Karthik, P N Pournami
Language: English
Year of publication: 2023
Source: ITM Web of Conferences, Vol 54, p 01004 (2023)
Document type: article
ISSN: 2271-2097
DOI: 10.1051/itmconf/20235401004
Description: Localization is one of the main tasks involved in the operation of autonomous agents (e.g., vehicles, robots). It allows them to track their paths and to properly detect and avoid obstacles. Visual Odometry (VO) is one of the techniques used for agent localization. VO involves estimating the motion of an agent using images taken by cameras attached to it. Conventional VO algorithms require specific workarounds for challenges posed by the working environment and the captured sensor data. On the other hand, Deep Learning approaches have shown tremendous efficiency and accuracy in tasks that require a high degree of adaptability and scalability. In this work, a novel deep learning model is proposed to perform VO tasks for space robotic applications. The model consists of an optical flow estimation module, which abstracts away scene-specific details from the input video sequence and produces an intermediate representation. A CNN module then learns relative poses from the optical flow estimates. The final module is a state-of-the-art Vision Transformer, which learns absolute poses from the relative poses learned by the CNN module. The model is trained on the KITTI dataset and has obtained a promising accuracy of approximately 2%. It has outperformed the baseline model, MagicVO, on a few sequences in the dataset.
Database: Directory of Open Access Journals
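
The abstract describes a three-stage data flow: an optical flow estimator feeds a CNN that regresses relative poses, and a Vision Transformer then produces absolute poses from the relative-pose sequence. The sketch below illustrates only that data flow; every module body (frame differencing, mean pooling, a random linear head, a cumulative sum) is a placeholder assumption, not the paper's actual networks.

```python
import numpy as np

# Hedged sketch of the three-stage pipeline the abstract describes:
# optical flow -> CNN (relative pose) -> Vision Transformer (absolute pose).
# All module internals below are placeholder assumptions; only the data
# flow between stages follows the description.

rng = np.random.default_rng(0)
POSE_HEAD = rng.standard_normal((6, 2))  # hypothetical CNN head weights

def optical_flow(frame_a, frame_b):
    """Stand-in flow estimator: a 2-channel (dx, dy)-like field."""
    diff = frame_b - frame_a
    return np.stack([diff, diff])            # shape (2, H, W)

def cnn_relative_pose(flow):
    """Stand-in CNN: pools the flow field into a 6-DoF relative pose."""
    feat = flow.reshape(2, -1).mean(axis=1)  # global average pool
    return POSE_HEAD @ feat                  # (tx, ty, tz, rx, ry, rz)

def vit_absolute_pose(rel_poses):
    """Stand-in for the ViT stage: integrates the relative-pose sequence
    into absolute poses (here, a plain cumulative sum over the sequence)."""
    return np.cumsum(np.stack(rel_poses), axis=0)

# Toy video: four 8x8 frames.
frames = [rng.standard_normal((8, 8)) for _ in range(4)]
rel = [cnn_relative_pose(optical_flow(a, b))
       for a, b in zip(frames, frames[1:])]
abs_poses = vit_absolute_pose(rel)
print(abs_poses.shape)  # one absolute 6-DoF pose per consecutive frame pair
```

In the real model the relative-to-absolute step is learned by the transformer rather than a fixed integration; the cumulative sum here only mirrors the role that stage plays in the pipeline.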