Author:
Deng, Yulin; Yin, Liju; Gao, Xiaoning; Zhou, Hui; Wang, Zhenzhou; Zou, Guofeng
Subject:
Source:
Visual Computer; Jun 2024, Vol. 40, Issue 6, p4441-4456, 16p
Abstract:
The implementation of 3D reconstruction for targets in the low-light-level (LLL) environment is an immediate requirement in military, aerospace and other fields related to this environment. However, in such a photon-deficient environment, the amount of available information is extremely limited, making the 3D reconstruction task challenging. To address this issue, an embeddable converged front- and back-end network (EC-FBNet) is proposed in this paper. It extracts sparse information from the LLL environment by aggregating multi-layer semantics and then infers the global topology of the 3D model from the similarity of features among object parts. For training, EC-FBNet adopts a two-stage integrated training scheme. We additionally construct an embedded global inferential attention module (GIAM) that distributes association weights among the points of the model and thereby reasons out the global topology of the 3D model. To acquire realistic images in the LLL environment, this study leverages a multi-pixel photon counter (MPPC) detector to capture stable photon-counting images, which are then packaged into a dataset for training the network. In experiments, the proposed approach not only achieves results superior to state-of-the-art approaches but is also competitive in the quality of the reconstructed models. We believe this approach can be a useful tool for 3D reconstruction in the LLL environment. [ABSTRACT FROM AUTHOR]
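Note: the record contains only the abstract, so the internal structure of GIAM is not available here. As a rough illustration of the general idea of distributing association weights among points via global attention, the sketch below implements a plain scaled dot-product self-attention over per-point features in NumPy; all names, shapes, and the residual update are assumptions for illustration, not the authors' module.

```python
# Hypothetical sketch of a global attention step over per-point features.
# Names, shapes, and the residual update are assumptions, not the authors' GIAM code.
import numpy as np

def global_attention(points: np.ndarray,
                     w_q: np.ndarray, w_k: np.ndarray, w_v: np.ndarray) -> np.ndarray:
    """points: (N, C) per-point features; w_q/w_k/w_v: (C, C) projection matrices.
    Returns features refined by association weights computed between all point pairs."""
    q = points @ w_q                                  # queries (N, C)
    k = points @ w_k                                  # keys    (N, C)
    v = points @ w_v                                  # values  (N, C)
    scores = q @ k.T / np.sqrt(q.shape[-1])           # pairwise similarity (N, N)
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: association weights per point
    return points + weights @ v                       # residual update with global context

# Usage with random data standing in for sparse LLL point features
rng = np.random.default_rng(0)
N, C = 1024, 64
feats = rng.standard_normal((N, C))
w_q, w_k, w_v = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
refined = global_attention(feats, w_q, w_k, w_v)
print(refined.shape)  # (1024, 64)
```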
Database:
Complementary Index |
External link: