Multimodal Early Raw Data Fusion for Environment Sensing in Automotive Applications

CCS Concepts: Computing methodologies --> Object identification; Applied computing --> Transportation
Description: Autonomous vehicles are coming ever closer to reality in ground transportation. Advances in computation have enabled powerful methods to process the large amounts of data required to drive safely on streets. Fusing the multiple sensors present in the vehicle allows building accurate world models that improve autonomous vehicle navigation. Among current techniques, the fusion of LIDAR, RADAR, and camera data by neural networks has shown significant improvement in object detection and in the estimation of geometry and dynamic behavior. Most existing methods propose parallel networks to fuse the sensors' measurements, increasing complexity and demand for computational resources. Fusing the data with a single neural network remains an open question and is the project's main focus. The aim is to develop a single neural network architecture that fuses the three types of sensors and to evaluate and compare the resulting approach with multi-neural-network proposals.
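The abstract's core idea is early raw-data fusion: instead of running one network per sensor and merging their outputs, all modalities are combined at the input so a single network processes them from its first layer. A minimal illustrative sketch of that input-level stacking step (all array names, shapes, and the assumption that LIDAR and RADAR measurements have already been projected onto the camera grid are hypothetical, not taken from the poster):

```python
import numpy as np

# Hypothetical sensor frames, assumed already spatially aligned to the camera grid.
H, W = 96, 128
camera = np.random.rand(H, W, 3).astype(np.float32)       # RGB image, 3 channels
lidar_depth = np.random.rand(H, W, 1).astype(np.float32)  # projected LIDAR depth map
radar_vel = np.random.rand(H, W, 1).astype(np.float32)    # projected RADAR radial velocity

def early_fuse(camera, lidar_depth, radar_vel):
    """Stack raw measurements channel-wise into one tensor so a single
    neural network sees every modality starting at its first layer."""
    return np.concatenate([camera, lidar_depth, radar_vel], axis=-1)

fused = early_fuse(camera, lidar_depth, radar_vel)
print(fused.shape)  # (96, 128, 5): one input tensor for one network
```

The design choice this illustrates is the trade-off the abstract mentions: a single fused input avoids maintaining parallel per-sensor networks, at the cost of requiring the modalities to share a common spatial representation.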
Posters
Marcelo Eduardo Pederiva, José Mario De Martino, and Alessandro Zimmer
DOI: 10.2312/egp.20221006
Access URL: https://explore.openaire.eu/search/publication?articleId=doi_________::cd4bf7ebdda2289f34e591005d846f2c
Accession number: edsair.doi...........cd4bf7ebdda2289f34e591005d846f2c
Author: Pederiva, Marcelo Eduardo, Martino, José Mario De, Zimmer, Alessandro
Publication year: 2022
Subject:
Database: OpenAIRE