Boosting LiDAR-Based Semantic Labeling by Cross-modal Training Data Generation

Authors: Markus Enzweiler, David Pfeiffer, Manuel Schäfer, Nick Schneider, J. Marius Zöllner, Beate Schwarz, David Peter, Florian Piewak, Peter Pinggera
Publication year: 2019
Source: Lecture Notes in Computer Science, ISBN 9783030110239, ECCV Workshops (6)
DOI: 10.1007/978-3-030-11024-6_39
Abstract: Mobile robots and autonomous vehicles rely on multi-modal sensor setups to perceive and understand their surroundings. Aside from cameras, LiDAR sensors represent a central component of state-of-the-art perception systems. In addition to accurate spatial perception, a comprehensive semantic understanding of the environment is essential for efficient and safe operation. In this paper we present a novel deep neural network architecture called LiLaNet for point-wise, multi-class semantic labeling of semi-dense LiDAR data. The network utilizes virtual image projections of the 3D point clouds for efficient inference. Further, we propose an automated process for large-scale cross-modal training data generation called Autolabeling, in order to boost semantic labeling performance while keeping the manual annotation effort low. The effectiveness of the proposed network architecture as well as the automated data generation process is demonstrated on a manually annotated ground truth dataset. LiLaNet is shown to significantly outperform current state-of-the-art CNN architectures for LiDAR data. Applying our automatically generated large-scale training data yields a boost of up to 14 percentage points compared to networks trained on manually annotated data only.
Database: OpenAIRE
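The abstract mentions "virtual image projections of the 3D point clouds" but the record gives no details. A common way to realize such a projection for rotating LiDAR sensors is a cylindrical range image: each point's azimuth maps to a column and its elevation to a row. The sketch below is a minimal, hypothetical illustration of that idea; the image size (64×870) and the vertical field of view are assumed, Velodyne-like values, not parameters taken from the paper.

```python
import numpy as np

def project_to_virtual_image(points, h=64, w=870,
                             fov_up=np.deg2rad(2.0),
                             fov_down=np.deg2rad(-24.8)):
    """Project 3D LiDAR points (N, 3) onto a cylindrical 'virtual image'.

    Rows correspond to elevation (laser rings), columns to azimuth.
    All geometry parameters are illustrative assumptions, not values
    from the LiLaNet paper. Returns a (h, w) range image plus the
    row/column index assigned to each input point.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                # range per point
    azimuth = np.arctan2(y, x)                        # in [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-9))

    # Normalize angles to [0, 1) and scale to pixel coordinates.
    u = 0.5 * (1.0 - azimuth / np.pi)                 # column: azimuth
    v = (fov_up - elevation) / (fov_up - fov_down)    # row: elevation
    col = np.clip((u * w).astype(int), 0, w - 1)
    row = np.clip((v * h).astype(int), 0, h - 1)

    image = np.zeros((h, w), dtype=np.float32)
    image[row, col] = r   # if points collide, the last one wins
    return image, row, col
```

A 2D CNN such as LiLaNet can then run on this dense image instead of the sparse 3D point set, which is what makes inference efficient; per-pixel class predictions are mapped back to points via the stored row/column indices.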