AdaFI-FCN: an adaptive feature integration fully convolutional network for predicting driver’s visual attention

Author: Bowen Shi, Weihua Dong, Zhicheng Zhan
Language: English
Year of publication: 2024
Source: Geo-spatial Information Science, Vol. 27, Iss. 4, pp. 1309-1325 (2024)
Document type: article
ISSN: 1009-5020 (print); 1993-5153 (online)
DOI: 10.1080/10095020.2022.2147028
Description: Visual Attention Prediction (VAP) is widely applied in GIS research, such as navigation task identification and driver assistance systems. Previous studies commonly used color information to detect the visual saliency of natural-scene images, but rarely considered adaptive feature integration across the different geospatial scenes encountered in specific tasks. To better predict visual attention during driving tasks, this paper proposes an Adaptive Feature Integration Fully Convolutional Network (AdaFI-FCN) that uses Scene-Adaptive Weights (SAW) to integrate RGB-D, motion, and semantic features. Quantitative comparison on the DR(eye)VE dataset shows that the proposed framework achieved the best accuracy and robustness among state-of-the-art models (AUC-Judd = 0.971, CC = 0.767, KL = 1.046, SIM = 0.579). In addition, an ablation study demonstrated the positive effect of the SAW method on prediction robustness under scene changes. The proposed model has the potential to benefit adaptive VAP research in broader geospatial scenes, such as AR-aided navigation, indoor navigation, and street-view image reading.
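
The record does not include implementation details, so the following is only a minimal sketch of what scene-adaptive weighted feature integration could look like in PyTorch. The module name SceneAdaptiveFusion, the pooling-plus-MLP gating branch, and the 64-unit hidden layer are all assumptions for illustration, not the authors' released architecture; it simply shows one softmax weight being predicted per feature stream (RGB-D, motion, semantic) and the streams being fused before a decoder that would output the attention map.

```python
# A hypothetical sketch (not the authors' code) of scene-adaptive feature fusion:
# a gating branch predicts one softmax weight per feature stream from a global
# scene descriptor, and the weighted streams are summed for the FCN decoder.
import torch
import torch.nn as nn


class SceneAdaptiveFusion(nn.Module):
    def __init__(self, channels: int, num_streams: int = 3):
        super().__init__()
        # Gating branch: global average pooling -> small MLP -> per-stream weights.
        self.gate = nn.Sequential(
            nn.Linear(channels * num_streams, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, num_streams),
        )

    def forward(self, streams):
        # streams: list of feature maps, each of shape (B, C, H, W).
        pooled = torch.cat([s.mean(dim=(2, 3)) for s in streams], dim=1)  # (B, C*num_streams)
        weights = torch.softmax(self.gate(pooled), dim=1)                 # (B, num_streams)
        # Weighted sum of the streams; each weight broadcasts over (C, H, W).
        fused = sum(w.view(-1, 1, 1, 1) * s
                    for w, s in zip(weights.unbind(dim=1), streams))
        return fused  # (B, C, H, W), to be passed to the decoder


if __name__ == "__main__":
    # Dummy tensors standing in for RGB-D, motion, and semantic feature maps.
    fusion = SceneAdaptiveFusion(channels=64, num_streams=3)
    feats = [torch.randn(2, 64, 28, 28) for _ in range(3)]
    print(fusion(feats).shape)  # torch.Size([2, 64, 28, 28])
```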
Database: Directory of Open Access Journals