Improvements to Target-Based 3D LiDAR to Camera Calibration

Author: Jiunn-Kai Huang, Jessy W. Grizzle
Language: English
Year of publication: 2019
Subjects:
FOS: Computer and information sciences
Computer Science - Computer Vision and Pattern Recognition (cs.CV)
Computer Science - Robotics (cs.RO)
LiDAR
camera
camera-LiDAR calibration
extrinsic calibration
Camera resectioning
Calibration
Sensor fusion
Point cloud
Pose
Computer science
Computer vision
Artificial intelligence
Quantization (image processing)
Projection (geometry)
Translation (geometry)
Transformation (function)
General Computer Science
General Engineering
General Materials Science
02 engineering and technology
0202 electrical engineering, electronic engineering, information engineering
0209 industrial biotechnology
020201 artificial intelligence & image processing
020901 industrial engineering & automation
lcsh:Electrical engineering. Electronics. Nuclear engineering
lcsh:TK1-9971
Source: IEEE Access, Vol. 8, pp. 134101-134110 (2020)
Description: The rigid-body transformation between a LiDAR and a monocular camera is required for sensor-fusion tasks such as SLAM. While determining such a transformation is not considered glamorous in any sense of the word, it is nonetheless crucial for many modern autonomous systems. Indeed, an error of a few degrees in rotation or a few percent in translation can lead to 20 cm reprojection errors at a distance of 5 m when overlaying a LiDAR image on a camera image. The biggest impediments to determining the transformation accurately are the relative sparsity of LiDAR point clouds and systematic errors in their distance measurements. This paper proposes (1) the use of targets of known dimension and geometry to improve target pose estimation in the face of the quantization and systematic errors inherent in a LiDAR image of a target, (2) a fitting method for the LiDAR-to-monocular-camera transformation that avoids the tedious task of target edge extraction from the point cloud, and (3) a “cross-validation study” based on projecting the 3D LiDAR target vertices onto the corresponding corners in the camera image. The end result is a 50% reduction in projection error and a 70% reduction in its variance with respect to the baseline.
Database: OpenAIRE
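
The error arithmetic quoted in the description can be verified with a minimal Python sketch. The intrinsics (600 px focal length), identity extrinsics, and 2.3-degree yaw perturbation below are illustrative assumptions, not values taken from the paper; the sketch only shows that a "few degrees" of rotation error shifts the projection of a point 5 m away by about 20 cm.

import numpy as np

def rot_y(deg):
    # Rotation about the camera's y-axis (yaw) by `deg` degrees.
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

def project(K, R, t, p):
    # Pinhole projection of a 3D LiDAR point into pixel coordinates.
    q = R @ p + t                # LiDAR frame -> camera frame
    return (K @ q)[:2] / q[2]

# Hypothetical intrinsics: 600 px focal length, 640x480 image.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assume, for illustration, that the true extrinsics are the identity.
R_true, t_true = np.eye(3), np.zeros(3)

# A LiDAR return 5 m straight ahead of the sensor.
p = np.array([0.0, 0.0, 5.0])

# Corrupt the rotation by 2.3 degrees, a "few degrees" of calibration error.
R_bad = rot_y(2.3) @ R_true

uv_true = project(K, R_true, t_true, p)
uv_bad  = project(K, R_bad,  t_true, p)

px_err = np.linalg.norm(uv_bad - uv_true)           # offset in the image plane
m_err  = px_err / K[0, 0] * p[2]                    # metric offset at 5 m depth

print(f"pixel error:  {px_err:.1f} px")             # ~24.1 px
print(f"metric error: {m_err:.3f} m at 5 m depth")  # ~0.201 m, i.e. ~20 cm

The closed-form check is Z * tan(theta) = 5 * tan(2.3 deg) ≈ 0.20 m, consistent with the 20 cm figure stated in the description.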