Author: |
Zunwang Ke, Chenyu Lin, Tao Zhang, Tingting Jia, Minghua Du, Gang Wang, Yugui Zhang |
Language: |
English |
Year of publication: |
2025 |
Subject: |
|
Source: |
Alexandria Engineering Journal, Vol 111, Iss , Pp 123-135 (2025) |
Document type: |
article |
ISSN: |
1110-0168 |
DOI: |
10.1016/j.aej.2024.10.061 |
Description: |
In the field of autonomous driving, the accuracy and real-time requirements for 3D object detection technology continue to rise, and meeting them is directly tied to the commercialization and market adoption of autonomous vehicles. Although pillar-based coding is efficient for onboard systems, it falls short in accuracy and in suppressing false positives. In this paper, we examine how to address the high false positive rate and low accuracy of existing methods. First, a MAP coding module is introduced to improve on previous point cloud feature coding modules, allowing efficient extraction of fine-grained features from point cloud data. Then, we introduce an innovative sparse dual attention (SDA) mechanism that efficiently filters out irrelevant details during feature extraction, thereby improving the relevance and efficiency of information extraction. Finally, to address the potential loss of information from purely local feature extraction, a local and global fusion module (CTGC) is introduced. Our method demonstrates its efficiency and accuracy through rigorous experimentation across diverse datasets. Analysis of the results shows that our solution provides accurate and robust detection results. Code will be available at https://github.com/lcy199905/MyOpenPCDet.git. |
Database: |
Directory of Open Access Journals |
External link: |
|