A Driver's Visual Attention Prediction Using Optical Flow
Authors: | Yeejin Lee, Byeongkeun Kang |
---|---|
Publication year: | 2021 |
Subject: | driver's perception modeling; visual attention estimation; optical flow; convolutional neural networks; intelligent vehicle system; automobile driving |
Source: | Sensors (Basel, Switzerland), Vol. 21, Iss. 11, Art. No. 3722 (2021) |
ISSN: | 1424-8220 |
Description: | Motion in videos refers to the apparent movement of objects, surfaces, and edges across image sequences caused by the relative motion between a camera and a scene. Motion, together with scene appearance, is an essential cue for estimating a driver’s visual attention allocation in computer vision. However, while attention prediction models based on scene appearance have been studied extensively, the role of motion in estimating a driver’s attention has received far less scrutiny in the literature. Therefore, in this work, we investigate the usefulness of motion information for estimating a driver’s visual attention. To analyze its effectiveness, we develop a deep neural network framework that predicts attention locations and attention levels from optical flow maps, which represent the movement of content across video frames. We validate the proposed motion-based prediction model by comparing it against current state-of-the-art prediction models that use RGB frames. The experimental results on a real-world dataset confirm our hypothesis that motion contributes to prediction accuracy and that motion features leave a margin for further improvement. (A minimal illustrative sketch of such a flow-based pipeline follows this record.) |
Database: | OpenAIRE |
External link: |
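The description above only outlines the approach at a high level. As a rough illustration of what a flow-based attention pipeline can look like, the sketch below computes dense optical flow with OpenCV's Farnebäck method and passes it through a toy encoder-decoder CNN in PyTorch that outputs a spatial attention map. This is a minimal sketch under my own assumptions: the `FlowAttentionNet` layers, the `flow_to_tensor` helper, and all hyperparameters are hypothetical and do not reproduce the authors' architecture.

```python
# Minimal sketch, NOT the authors' model: it only illustrates feeding dense
# optical-flow maps (instead of RGB frames) into a small encoder-decoder CNN
# that outputs a driver-attention heat map.
import cv2
import numpy as np
import torch
import torch.nn as nn


def flow_to_tensor(prev_bgr: np.ndarray, curr_bgr: np.ndarray) -> torch.Tensor:
    """Compute dense optical flow between two frames and pack it as a 2-channel tensor."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Farneback dense flow: (H, W, 2) array of per-pixel (dx, dy) displacements.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # (H, W, 2) -> (1, 2, H, W) float tensor for the network.
    return torch.from_numpy(flow).permute(2, 0, 1).unsqueeze(0).float()


class FlowAttentionNet(nn.Module):
    """Toy encoder-decoder mapping a flow field to a single-channel attention map."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, flow: torch.Tensor) -> torch.Tensor:
        logits = self.decoder(self.encoder(flow))
        # Per-frame softmax turns logits into a spatial probability (saliency) map.
        b, _, h, w = logits.shape
        return torch.softmax(logits.view(b, -1), dim=1).view(b, 1, h, w)


if __name__ == "__main__":
    # Two random "frames" stand in for consecutive frames of a driving clip.
    prev_frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
    curr_frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
    attention = FlowAttentionNet()(flow_to_tensor(prev_frame, curr_frame))
    print(attention.shape)  # torch.Size([1, 1, 240, 320])
```

The sketch only fixes the input and output shapes of a motion-only branch; the paper itself evaluates its motion-based model against state-of-the-art RGB-based predictors on a real-world driving dataset.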