Author: |
Farhan Mahmood, Daehyeon Jeong, Jeha Ryu |
Language: |
English |
Year of publication: |
2023 |
Subject: |
|
Source: |
IEEE Access, Vol 11, Pp 29263-29274 (2023) |
Document type: |
article |
ISSN: |
2169-3536 |
DOI: |
10.1109/ACCESS.2023.3259992 |
Description: |
Traffic accident anticipation is essential for successful autonomous and assistive driving systems. Existing accident anticipation algorithms, which mostly rely on visual features of the accident-related objects involved, provide both high AP (Average Precision) and TTA (Time to Accident). Despite a spatiotemporal relationship with the visual features of the accident-related objects, these methods are often biased and therefore not well generalizable. In this paper, we first discuss dataset biases and show that those high AP and TTA results come mainly from visual biases. Second, to overcome some of the visual biases, we propose a novel deep learning framework that uses both visual and geometric information about the accident-related objects captured in dashcam videos. Third, we demonstrate the effectiveness of the proposed method in terms of generalization capability, compared with existing approaches, on several open datasets of actual accident videos. |
Database: |
Directory of Open Access Journals |
External link: |
|