Improved Semantic Stixels via Multimodal Sensor Fusion
Author: Markus Enzweiler, Florian Piewak, Peter Pinggera, Marius Zöllner, David Pfeiffer
Year of publication: 2019
Subject: Ground truth; Modality (human–computer interaction); Optimization problem; Computer science; Sensor fusion; LiDAR; Stereopsis; RGB color model; Computer vision; Artificial intelligence; Image processing; Representation (mathematics)
Source: Lecture Notes in Computer Science (ISBN 9783030129385), GCPR
DOI: 10.1007/978-3-030-12939-2_31
Description: This paper presents a compact and accurate representation of 3D scenes observed by a LiDAR sensor and a monocular camera. The proposed method builds on the well-established Stixel model, originally developed for stereo vision applications, and extends it to incorporate data from multiple sensor modalities. The resulting mid-level fusion scheme exploits both the geometric accuracy of LiDAR measurements and the high resolution and semantic detail of RGB images. The obtained environment model provides a geometrically and semantically consistent representation of the 3D scene at a significantly reduced data volume while minimizing information loss. Because the different sensor modalities enter a joint optimization problem as inputs, the solution is obtained with only minor computational overhead (see the sketch after this record). We demonstrate the effectiveness of the proposed multimodal Stixel algorithm on a manually annotated ground-truth dataset. Our results indicate that mid-level fusion of LiDAR and camera data significantly improves both the geometric and semantic accuracy of the Stixel model, while reducing both the computational overhead and the amount of generated data compared to using either modality on its own.
Database: OpenAIRE
External link:
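
As a rough illustration of the abstract's central idea, posing multimodal Stixel estimation as a joint optimization over depth and semantic hypotheses, the Python sketch below combines a LiDAR geometric-consistency term with a camera semantic term in a single per-stixel cost. All names here (`Stixel`, `multimodal_stixel_cost`, the weights `w_geo` and `w_sem`) and the brute-force hypothesis search are assumptions for illustration; the paper's actual energy formulation and its efficient solver are not reproduced.

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical minimal Stixel: a thin vertical column segment with a
# depth estimate and a semantic label (fields are illustrative only).
@dataclass
class Stixel:
    col: int       # image column of the stixel
    v_top: int     # top image row of the segment
    v_bottom: int  # bottom image row of the segment
    depth: float   # fused depth estimate in meters
    label: int     # semantic class id

def multimodal_stixel_cost(lidar_depths, cam_label_probs, depth, label,
                           w_geo=1.0, w_sem=1.0):
    """Per-stixel data term: a geometric residual against LiDAR depths plus
    a semantic negative log-likelihood from per-pixel camera class scores.
    Weights and terms are assumptions, not the authors' formulation."""
    geo_cost = np.mean((lidar_depths - depth) ** 2)                # LiDAR consistency
    sem_cost = -np.mean(np.log(cam_label_probs[:, label] + 1e-9))  # semantic fit
    return w_geo * geo_cost + w_sem * sem_cost

def best_stixel_hypothesis(lidar_depths, cam_label_probs,
                           depth_candidates, num_classes):
    """Brute-force search over (depth, label) hypotheses for one column
    segment; the paper instead solves a joint optimization efficiently."""
    best = None
    for depth in depth_candidates:
        for label in range(num_classes):
            c = multimodal_stixel_cost(lidar_depths, cam_label_probs, depth, label)
            if best is None or c < best[0]:
                best = (c, depth, label)
    return best

# Illustrative usage with synthetic data for one column segment.
rng = np.random.default_rng(0)
lidar = rng.uniform(9.0, 11.0, size=20)        # LiDAR returns near 10 m
probs = rng.dirichlet(np.ones(5), size=20)     # per-pixel class probabilities
cost, depth, label = best_stixel_hypothesis(
    lidar, probs, depth_candidates=np.arange(5.0, 15.0, 0.5), num_classes=5)
print(f"best hypothesis: depth={depth:.1f} m, class={label}, cost={cost:.3f}")
```

Treating both modalities as inputs to one cost, rather than fusing independently estimated results afterwards, is what the abstract calls mid-level fusion; in this sketch the balance between geometry and semantics is simply controlled by the illustrative weights `w_geo` and `w_sem`.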