Author: |
Liangchao Guo, Lin Chen, Qiufu Wang, Zhuo Zhang, Xiaoliang Sun |
Language: |
English |
Year of publication: |
2024 |
Subject: |
|
Source: |
Drones, Vol 8, Iss 11, p 626 (2024) |
Document type: |
article |
ISSN: |
2504-446X |
DOI: |
10.3390/drones8110626 |
Description: |
The autonomous landing of unmanned aerial vehicles (UAVs) relies on a precise relative 6D pose between platforms. Existing model-based monocular pose estimation methods require an accurate 3D model of the target and fail when such a model is unavailable. This paper exploits the multi-view geometry constraints within a monocular image sequence to address this problem, introducing a novel monocular pose estimation approach that jointly optimizes the target's 3D model and the relative 6D pose. The target's 3D model is represented by a set of sparse 3D landmarks. The corresponding 2D landmarks are detected in the input image by a trained neural network, and an initial pose estimate is obtained from the 2D–3D correspondences by solving the Perspective-n-Point (PnP) problem. For the joint optimization, the objective function is built on the minimization of the reprojection error, with the correction values of the 3D landmarks and the 6D pose as the parameters to be solved. Solving this optimization problem realizes the joint optimization of the target's 3D model and the 6D pose. In addition, a sliding window combined with a keyframe extraction strategy is adopted to speed up processing. Experimental results on synthetic and real image sequences show that the proposed method achieves real-time, online, high-precision monocular pose estimation in the absence of an accurate 3D model via the joint optimization of the target's 3D model and pose. |
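The core idea described in the abstract — minimizing reprojection error jointly over the camera poses of a monocular sequence and correction values for an inaccurate sparse 3D-landmark model — can be sketched as a small nonlinear least-squares problem. The intrinsics, landmark counts, poses, and noise levels below are illustrative assumptions, not the paper's values, and the batch solve stands in for the paper's sliding-window, keyframe-based pipeline:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical pinhole intrinsics (assumed, not from the paper).
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])

rng = np.random.default_rng(0)
X_true = rng.uniform(-1., 1., (8, 3))                        # true sparse 3D landmarks
X_init = X_true + 0.05 * rng.standard_normal(X_true.shape)   # inaccurate prior 3D model

def project(X, rvec, t):
    """Project 3D points into pixels for pose (axis-angle rvec, translation t)."""
    Xc = Rotation.from_rotvec(rvec).apply(X) + t
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3]

# Three frames of a simulated monocular sequence (ground-truth poses, assumed).
poses_true = [(np.array([0.10 * i, -0.05 * i, 0.02 * i]),
               np.array([0.20 * i, 0.0, 5.0])) for i in range(3)]
# In the paper these 2D landmarks come from a trained detector; here we simulate them.
uv_obs = [project(X_true, r, t) for r, t in poses_true]

n = len(poses_true)

def residual(p):
    """Stacked reprojection error over all frames; unknowns are the per-frame
    6D poses plus one shared set of 3D-landmark corrections dX."""
    dX = p[6 * n:].reshape(-1, 3)
    res = []
    for i, uv in enumerate(uv_obs):
        rvec, t = p[6 * i:6 * i + 3], p[6 * i + 3:6 * i + 6]
        res.append((project(X_init + dX, rvec, t) - uv).ravel())
    return np.concatenate(res)

# Initial guess: identity rotations, a rough depth, zero model correction
# (in the paper the per-frame initialization comes from solving PnP).
x0 = np.zeros(6 * n + X_init.size)
x0[[5, 11, 17]] = 4.0
sol = least_squares(residual, x0)
```

The multi-view constraints are what make the model correction observable: a single frame would leave more unknowns than residuals, while stacking frames that share one `dX` over-determines the system, mirroring the paper's use of the image sequence.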
Database: |
Directory of Open Access Journals |
External link: |
|