SparseOcc: Rethinking Sparse Latent Representation for Vision-Based Semantic Occupancy Prediction

Authors: Pin Tang, Zhongdao Wang, Guoqing Wang, Jilai Zheng, Xiangxuan Ren, Bailan Feng, Chao Ma
Publication Year: 2024
Source: IEEE Conference on Computer Vision and Pattern Recognition 2024 (CVPR 2024)
Document Type: Working Paper
Description: Vision-based perception for autonomous driving requires explicit modeling of a 3D space, into which 2D latent representations are mapped and on which subsequent 3D operators are applied. However, operating on dense latent spaces introduces cubic time and space complexity, which limits scalability in terms of perception range or spatial resolution. Existing approaches compress the dense representation using projections like Bird's Eye View (BEV) or Tri-Perspective View (TPV). Although efficient, these projections result in information loss, especially for tasks like semantic occupancy prediction. To address this, we propose SparseOcc, an efficient occupancy network inspired by sparse point cloud processing. It utilizes a lossless sparse latent representation with three key innovations. First, a 3D sparse diffuser performs latent completion using spatially decomposed 3D sparse convolutional kernels. Second, a feature pyramid with sparse interpolation enhances each scale with information from the others. Finally, the transformer head is redesigned as a sparse variant. SparseOcc achieves a remarkable 74.9% reduction in FLOPs over the dense baseline. Interestingly, it also improves accuracy, from 12.8% to 14.1% mIoU, which can in part be attributed to the sparse representation's ability to avoid hallucinations on empty voxels.
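The cost saving from spatially decomposed 3D kernels can be illustrated with a simple multiply-accumulate (MAC) count. The sketch below is not the paper's implementation; it only compares a full k×k×k convolution kernel against a decomposition into three 1D kernels (one per spatial axis), assuming the channel count is kept constant between the 1D stages:

```python
def macs_full(k: int, c_in: int, c_out: int) -> int:
    """MACs per output voxel for a full k x k x k convolution kernel."""
    return k ** 3 * c_in * c_out

def macs_decomposed(k: int, c_in: int, c_out: int) -> int:
    """MACs per output voxel for three axis-aligned 1D convolutions
    (kernel sizes k x 1 x 1, 1 x k x 1, 1 x 1 x k), assuming intermediate
    stages keep c_out channels (an illustrative assumption)."""
    return k * c_in * c_out + 2 * (k * c_out * c_out)

k, c = 3, 64
full = macs_full(k, c, c)          # 27 * 64 * 64 = 110,592 MACs/voxel
decomp = macs_decomposed(k, c, c)  # 9 * 64 * 64  =  36,864 MACs/voxel
print(f"decomposed/full = {decomp / full:.3f}")  # 0.333 for equal channels
```

For a 3×3×3 kernel with equal channel counts, the decomposition needs 3k = 9 per-channel taps instead of k³ = 27, a 3× reduction per voxel; combined with operating only on occupied (sparse) voxels, this is the kind of saving the abstract's 74.9% FLOPs reduction draws on.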
Comment: 10 pages, 4 figures, accepted by CVPR 2024
Database: arXiv