Leveraging Pre-Trained 3D Object Detection Models For Fast Ground Truth Generation

Authors: Steven L. Waslander, Ali Harakeh, Jungwook Lee, Sean Walsh
Year of publication: 2018
Source: ITSC
DOI: 10.48550/arxiv.1807.06072
Description: Training 3D object detectors for autonomous driving has been limited to small datasets due to the effort required to generate annotations. Reducing both the complexity of the annotation task and the amount of task switching done by annotators is key to reducing the effort and time required to generate 3D bounding box annotations. This paper introduces a novel ground truth generation method that combines human supervision with pre-trained neural networks to generate per-instance 3D point cloud segmentation, 3D bounding boxes, and class annotations. The annotators provide object anchor clicks, which serve as seeds for generating instance segmentation results in 3D. The points belonging to each instance are then used to regress object centroids, bounding box dimensions, and object orientations. Our proposed annotation scheme requires 30x less human annotation time. We use the KITTI 3D object detection dataset [1] to evaluate the efficiency and quality of our annotation scheme. We also test the proposed scheme on previously unseen data from the Autonomoose self-driving vehicle to demonstrate the generalization capabilities of the network.
Database: OpenAIRE
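
To illustrate the click-seeded pipeline described in the abstract (anchor click, per-instance point segmentation, then box regression), below is a minimal sketch. It is not the authors' implementation: the paper uses pre-trained neural networks for segmentation and box regression, whereas this sketch substitutes a simple radius-based grouping around the clicked point and a PCA-based yaw estimate; all function names and parameters here are hypothetical.

```python
# Illustrative sketch only: seed-based grouping stands in for the paper's
# pre-trained instance segmentation network, and PCA on the ground plane
# stands in for the learned orientation/dimension regression.
import numpy as np

def segment_instance(points, anchor_click, radius=2.0):
    """Hypothetical seed-based grouping: keep points within `radius` metres
    of the annotator's anchor click."""
    dists = np.linalg.norm(points - anchor_click, axis=1)
    return points[dists < radius]

def fit_box(instance_points):
    """Estimate a 3D box (centroid, dimensions, yaw) from instance points.
    Yaw is taken from the dominant horizontal direction via PCA."""
    centroid = instance_points.mean(axis=0)
    xy = instance_points[:, :2] - centroid[:2]
    eigvals, eigvecs = np.linalg.eigh(np.cov(xy, rowvar=False))
    major = eigvecs[:, np.argmax(eigvals)]          # dominant ground-plane axis
    yaw = np.arctan2(major[1], major[0])
    # Rotate points into the box frame to measure extents.
    rot = np.array([[np.cos(-yaw), -np.sin(-yaw)],
                    [np.sin(-yaw),  np.cos(-yaw)]])
    aligned_xy = xy @ rot.T
    length, width = aligned_xy.max(axis=0) - aligned_xy.min(axis=0)
    height = np.ptp(instance_points[:, 2])
    return centroid, (length, width, height), yaw

# Example: a synthetic car-sized cluster around a simulated anchor click.
rng = np.random.default_rng(0)
background = rng.uniform(-20, 20, size=(5000, 3))
click = np.array([5.0, 3.0, 0.5])
cluster = click + rng.normal(scale=0.8, size=(300, 3))
cloud = np.vstack([background, cluster])
instance = segment_instance(cloud, click)
print(fit_box(instance))
```

The sketch only shows how a single click can drive the segmentation-then-regression flow; the quality and 30x annotation-time figures reported in the paper depend on the pre-trained networks it actually uses.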