TFDepth: Self-Supervised Monocular Depth Estimation with Multi-Scale Selective Transformer Feature Fusion
Author: | Hongli Hu, Jun Miao, Guanghui Zhu, Jie Yan, Jun Chu |
---|---|
Language: | English |
Year of publication: | 2024 |
Subject: | |
Source: | Image Analysis and Stereology, Vol 43, Iss 2 (2024) |
Document type: | article |
ISSN: | 1580-3139; 1854-5165 |
DOI: | 10.5566/ias.2987 |
Description: | Existing self-supervised models for monocular depth estimation suffer from issues such as discontinuity, blurred edges, and unclear contours, particularly for small objects. We propose a self-supervised monocular depth estimation network with multi-scale selective Transformer feature fusion. To preserve finer detail, the network uses a multi-scale encoder to extract features and leverages the Transformer's self-attention mechanism to capture global contextual information, enabling better depth prediction for small objects. We also propose a multi-scale selective fusion module (MSSF), which makes full use of multi-scale feature information in the decoder and fuses it selectively, stage by stage, effectively suppressing noise while retaining local detail to produce depth maps with sharp edges. Experimental evaluations on the KITTI dataset show that the proposed algorithm achieves an absolute relative error (Abs Rel) of 0.098 and an accuracy (δ) of 0.983. The results indicate that the proposed algorithm not only estimates depth with high accuracy but also predicts continuous depth maps with clear edges. |
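The record does not specify the internals of the MSSF module; purely as a rough illustration of the general idea of selective fusion between two feature scales, here is a minimal NumPy sketch. The function name, the global-average-pooling descriptors, and the per-channel softmax gating are all assumptions for illustration, not the paper's actual design:

```python
import numpy as np

def selective_fuse(feat_a, feat_b):
    """Hypothetical selective fusion of two same-shape (C, H, W)
    feature maps via per-channel softmax gating (illustrative only)."""
    # Global average pooling gives a cheap per-channel descriptor
    desc_a = feat_a.mean(axis=(1, 2))
    desc_b = feat_b.mean(axis=(1, 2))
    # Softmax over the two scales, independently for each channel
    logits = np.stack([desc_a, desc_b])          # shape (2, C)
    w = np.exp(logits - logits.max(axis=0))
    w = w / w.sum(axis=0)                        # gates sum to 1 per channel
    # Weighted sum, broadcasting the gates over spatial dimensions
    return (w[0][:, None, None] * feat_a +
            w[1][:, None, None] * feat_b)

# Toy check: the fused map lies between its two inputs
a = np.ones((4, 8, 8)) * 2.0
b = np.zeros((4, 8, 8))
fused = selective_fuse(a, b)
print(fused.shape)  # (4, 8, 8)
```

The gating lets the decoder weight one scale more heavily per channel instead of averaging scales uniformly, which is one way noise from a coarse scale can be suppressed while fine detail is kept.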
Database: | Directory of Open Access Journals |
External link: |