CompletionFormer: Depth Completion with Convolutions and Vision Transformers

Author: Youmin Zhang, Xianda Guo, Matteo Poggi, Zheng Zhu, Guan Huang, Stefano Mattoccia
Year of publication: 2023
Subject:
Document type: Working Paper
Description: Given sparse depths and the corresponding RGB images, depth completion aims to spatially propagate the sparse measurements throughout the whole image to obtain a dense depth prediction. Despite the tremendous progress of deep-learning-based depth completion methods, the locality of convolutional layers and graph models makes it hard for networks to model long-range relationships between pixels. While recent fully Transformer-based architectures have reported encouraging results thanks to their global receptive field, performance and efficiency gaps with respect to well-developed CNN models remain because such architectures degrade local feature details. This paper proposes a Joint Convolutional Attention and Transformer block (JCAT), which deeply couples a convolutional attention layer and a Vision Transformer into one block, as the basic unit for constructing our depth completion model in a pyramidal structure. This hybrid architecture naturally benefits from both the local connectivity of convolutions and the global context of the Transformer in a single model. As a result, our CompletionFormer outperforms state-of-the-art CNN-based methods on the outdoor KITTI Depth Completion benchmark and the indoor NYUv2 dataset, while achieving significantly higher efficiency (nearly 1/3 of the FLOPs) compared to pure Transformer-based methods. Code is available at \url{https://github.com/youmi-zym/CompletionFormer}.
Comment: Accepted by CVPR 2023. Code: https://github.com/youmi-zym/CompletionFormer. Project: https://youmi-zym.github.io/projects/CompletionFormer/
Database: arXiv
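
As a loose illustration of the hybrid design summarized in the description above, the following is a minimal PyTorch sketch of a block that runs a convolutional attention branch and a multi-head self-attention branch in parallel over the same features and fuses them with a residual connection. All module names, the gating, and the fusion scheme here are illustrative assumptions, not the paper's exact JCAT design; the real implementation is in the linked repository.

    import torch
    import torch.nn as nn

    class JCATBlockSketch(nn.Module):
        """Hypothetical joint convolutional-attention / Transformer block.

        A convolutional attention branch preserves local detail while a
        multi-head self-attention branch provides a global receptive field;
        both branches share the input and are fused by a 1x1 convolution.
        """

        def __init__(self, channels: int, num_heads: int = 4):
            super().__init__()
            # Convolutional attention branch: depthwise conv + 1x1 conv
            # producing a sigmoid gate over the input features (assumed form).
            self.local_gate = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
                nn.Conv2d(channels, channels, 1),
                nn.Sigmoid(),
            )
            # Transformer branch: LayerNorm + self-attention over the
            # flattened spatial positions.
            self.norm = nn.LayerNorm(channels)
            self.attn = nn.MultiheadAttention(channels, num_heads,
                                              batch_first=True)
            # Fuse the concatenated branches back to the input width.
            self.fuse = nn.Conv2d(2 * channels, channels, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            local = x * self.local_gate(x)              # locally gated features
            tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C)
            glob, _ = self.attn(*(self.norm(tokens),) * 3)  # global attention
            glob = glob.transpose(1, 2).reshape(b, c, h, w)
            return x + self.fuse(torch.cat([local, glob], dim=1))

    if __name__ == "__main__":
        block = JCATBlockSketch(channels=32)
        out = block(torch.randn(1, 32, 16, 16))
        print(out.shape)  # torch.Size([1, 32, 16, 16])

Stacking such blocks at several resolutions would yield the pyramidal structure the description mentions; the per-block residual keeps the two branches complementary rather than competing.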