View Synthesis: LiDAR Camera versus Depth Estimation

Authors: Gauthier Lafruit, Yupeng Xie, Daniele Bonatto, Mehrdad Teratani, Sarah Fachada
Contributors: Skala, Václav
Year of publication: 2021
Source: International Conference on Computer Graphics, Visualization and Computer Vision 2021 (WSCG)
ISSN: 2464-4617
DOI: 10.24132/csrn.2021.3101.35
Description: Depth-Image-Based Rendering (DIBR) can synthesize a virtual view image from a set of multiview images and corresponding depth maps. However, this requires accurate depth map estimation, which incurs a high computational cost of several minutes per frame in DERS (MPEG-I's Depth Estimation Reference Software), even on a high-end computer. LiDAR cameras can thus be an alternative to DERS in real-time DIBR applications. We compare the quality of a low-cost LiDAR camera, the Intel RealSense LiDAR L515, adequately calibrated and configured, against DERS, using MPEG-I's Reference View Synthesizer (RVS). In IV-PSNR, the LiDAR camera reaches 32.2 dB view synthesis quality with a 15 cm camera baseline and 40.3 dB with a 2 cm baseline. Though DERS outperforms the LiDAR camera by 4.2 dB, the latter provides a better quality-performance trade-off. Moreover, visual inspection shows that LiDAR's virtual views have even slightly higher quality than DERS's in most tested low-texture scene areas, except at object borders. Overall, we highly recommend using LiDAR cameras over advanced depth estimation methods (like DERS) in real-time DIBR applications. Nevertheless, this requires delicate calibration with multiple tools, further detailed in the paper.
Database: OpenAIRE
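
For context on the DIBR pipeline summarized in the description above, the sketch below shows the core depth-based forward-warping step: each source pixel is back-projected using its depth, transformed into the virtual camera's frame, and re-projected with z-buffered splatting. This is a minimal NumPy illustration under assumed pinhole-camera conventions, not the paper's RVS or DERS implementation; the function name warp_view and its parameters are hypothetical.

```python
import numpy as np

def warp_view(color, depth, K_src, K_dst, R, t):
    """Forward-warp a source view into a virtual camera pose.

    color : (H, W, 3) source image
    depth : (H, W) metric depth per source pixel
    K_src, K_dst : (3, 3) camera intrinsics
    R, t : rotation (3, 3) and translation (3,) from source to virtual camera
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Back-project pixels to 3D points in the source camera frame.
    pts = np.linalg.inv(K_src) @ (pix * depth.reshape(1, -1))

    # Move the points into the virtual camera frame and project them.
    proj = K_dst @ (R @ pts + t.reshape(3, 1))
    z = proj[2]
    valid = z > 1e-6
    x = np.round(proj[0] / np.where(valid, z, 1.0)).astype(int)
    y = np.round(proj[1] / np.where(valid, z, 1.0)).astype(int)
    inside = valid & (x >= 0) & (x < W) & (y >= 0) & (y < H)

    # Z-buffered splatting: the nearest point wins at each target pixel.
    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    src_colors = color.reshape(-1, 3)
    for i in np.flatnonzero(inside):
        if z[i] < zbuf[y[i], x[i]]:
            zbuf[y[i], x[i]] = z[i]
            out[y[i], x[i]] = src_colors[i]
    return out
```

Likewise, a minimal capture sketch for the Intel RealSense LiDAR L515 using the pyrealsense2 bindings, assuming the listed stream profiles are supported by the attached device; aligning depth to the color camera is one of the calibration-sensitive steps the paper cautions about.

```python
import numpy as np
import pyrealsense2 as rs  # Intel RealSense SDK 2.0 Python bindings

pipe = rs.pipeline()
cfg = rs.config()
# Stream profiles assumed available on the L515; adjust to your device.
cfg.enable_stream(rs.stream.depth, 1024, 768, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
profile = pipe.start(cfg)

# Scale converting raw 16-bit depth units to meters.
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
align = rs.align(rs.stream.color)  # register depth to the color camera

try:
    frames = align.process(pipe.wait_for_frames())
    depth_m = np.asanyarray(frames.get_depth_frame().get_data()) * depth_scale
    color = np.asanyarray(frames.get_color_frame().get_data())
finally:
    pipe.stop()
```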