Author: Saidrasul Usmankhujaev, Shokhrukh Baydadaev, Jang Woo Kwon
Language: English
Year of publication: 2024
Source: IEEE Access, Vol. 12, pp. 169136-169148 (2024)
Document type: article
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3488510
Description: Robust perception, precise distance estimation, and reliable object tracking are essential for autonomous vehicle systems. In this paper, we present a novel deep learning-based fusion architecture that combines LiDAR and camera data using Bird's-Eye-View (BEV) representations to improve autonomous driving capabilities. The system addresses three main problems: object detection, object distance estimation, and multi-object tracking. For perception, we employ a sensor fusion technique to enhance 3D object detection across a variety of environmental settings. For distance estimation, a triangulation-based approach is combined with BEV transformations to compute object distances precisely. Finally, to maintain reliable tracking even under occlusions and in changing environments, we apply a multi-object tracking (MOT) approach that combines 3D bounding boxes and BEV maps. Experimental results on the KITTI dataset demonstrate the effectiveness of the proposed method, achieving a Multiple Object Tracking Accuracy (MOTA) of 83.9% and a Multiple Object Tracking Precision (MOTP) of 84.2%, while significantly reducing false positives and false negatives compared to baseline approaches. These findings underline the potential of our approach to advance the safety and efficiency of autonomous driving systems.
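The MOTA and MOTP figures quoted above follow the standard CLEAR MOT metrics (Bernardin & Stiefelhagen). As a point of reference, below is a minimal Python sketch of how these two metrics are typically computed from per-frame tracking statistics; the FrameStats structure and the example numbers are illustrative assumptions, not code or data from the paper.

```python
# Minimal sketch of the CLEAR MOT metrics reported in the abstract.
# FrameStats and the example values are illustrative assumptions,
# not the paper's evaluation code or results.
from dataclasses import dataclass

@dataclass
class FrameStats:
    fn: int             # false negatives: ground-truth objects missed
    fp: int             # false positives: spurious detections
    idsw: int           # identity switches between consecutive frames
    gt: int             # ground-truth objects present in the frame
    match_error: float  # summed matching error over matched pairs
    matches: int        # number of matched hypothesis/ground-truth pairs

def mota(frames):
    """MOTA = 1 - sum_t(FN_t + FP_t + IDSW_t) / sum_t(GT_t)."""
    errors = sum(f.fn + f.fp + f.idsw for f in frames)
    return 1.0 - errors / sum(f.gt for f in frames)

def motp(frames):
    """MOTP = total matching error / total number of matches."""
    return sum(f.match_error for f in frames) / sum(f.matches for f in frames)

# Two frames of made-up tracking output, just to exercise the formulas.
frames = [
    FrameStats(fn=1, fp=0, idsw=0, gt=10, match_error=7.4, matches=9),
    FrameStats(fn=0, fp=1, idsw=1, gt=11, match_error=9.2, matches=11),
]
print(f"MOTA = {mota(frames):.3f}, MOTP = {motp(frames):.3f}")
# -> MOTA = 0.857, MOTP = 0.830
```

Note that on the KITTI tracking benchmark MOTP is commonly reported as the average bounding-box overlap of matched pairs (higher is better), which may be how the paper's 84.2% figure should be read; in that case match_error would hold summed overlap rather than a distance.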
Database: Directory of Open Access Journals