Improved itracker combined with bidirectional long short-term memory for 3D gaze estimation using appearance cues
Author: | Zhuo Zhang, Honghai Liu, Xiaolong Zhou, Shen-Yong Chen, Zhanpeng Shao, Jianing Lin |
---|---|
Year of publication: | 2020 |
Subject: | Estimation; Gaze; Long short-term memory; Artificial intelligence; Computer vision; Image processing; Image resolution; Cognitive Neuroscience; Computer Science Applications |
Source: | Zhou, X, Lin, J, Zhang, Z, Shao, Z, Chen, S & Liu, H 2019, 'Improved itracker combined with bidirectional long short-term memory for 3D gaze estimation using appearance cues', Neurocomputing. https://doi.org/10.1016/j.neucom.2019.04.099 |
ISSN: | 0925-2312 |
DOI: | 10.1016/j.neucom.2019.04.099 |
Description: | Gaze is an important non-verbal cue for inferring human attention and has been widely employed in many human–computer interaction applications. In this paper, we propose an improved Itracker to predict the subject's gaze from a single image frame, and employ a many-to-one bidirectional Long Short-Term Memory (bi-LSTM) network to model the temporal information between frames for gaze estimation on video sequences. For single-frame gaze estimation, we improve the conventional Itracker by removing the face-grid and eliminating one network branch by concatenating the two eye-region images. Experimental results show that our improved Itracker achieves a significant 11.6% improvement over state-of-the-art methods on the MPIIGaze dataset and maintains robust estimation accuracy across different image resolutions while greatly reducing network complexity. For video-sequence gaze estimation, by employing the bi-LSTM to model the temporal information between frames, experimental results on the EyeDiap dataset demonstrate a further 3% accuracy improvement. |
Database: | OpenAIRE |
External link: |
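The architecture described in the abstract can be sketched as follows. This is a minimal illustrative PyTorch model, not the paper's implementation: it assumes a shared convolutional stack, a single branch for the horizontally concatenated two-eye image (replacing the two per-eye branches and the face-grid of the original Itracker), and a many-to-one bidirectional LSTM whose last output is mapped to a 3D gaze vector. All layer sizes and the input resolutions are placeholder assumptions.

```python
# Sketch of the described pipeline: per-frame appearance features from an
# improved-Itracker-style CNN, then a many-to-one bi-LSTM over a frame
# sequence. Sizes and layer counts are illustrative, not from the paper.
import torch
import torch.nn as nn


def conv_stack() -> nn.Sequential:
    """Small conv feature extractor; output is a 32-dim vector per image."""
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )


class ImprovedItrackerSketch(nn.Module):
    """Two branches only: face image + concatenated left/right eye image
    (no face-grid branch, as described in the abstract)."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.face_net = conv_stack()
        self.eyes_net = conv_stack()  # takes the two eyes side by side
        self.fc = nn.Linear(32 + 32, feat_dim)

    def forward(self, face: torch.Tensor, eyes: torch.Tensor) -> torch.Tensor:
        # face: (B, 3, H, W); eyes: (B, 3, h, 2*w) concatenated eye crops
        fused = torch.cat([self.face_net(face), self.eyes_net(eyes)], dim=1)
        return self.fc(fused)  # per-frame appearance feature


class GazeBiLSTM(nn.Module):
    """Many-to-one bi-LSTM: a sequence of frame features -> one 3D gaze vector."""

    def __init__(self, feat_dim: int = 64, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 3)  # 3D gaze output

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (B, T, feat_dim); keep only the last time step (many-to-one)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])


# Usage sketch: one gaze vector per 5-frame sequence.
frame_net = ImprovedItrackerSketch()
seq_net = GazeBiLSTM()
face = torch.randn(2, 3, 64, 64)   # batch of 2 face crops (assumed size)
eyes = torch.randn(2, 3, 32, 64)   # left+right eye crops, concatenated
feat = frame_net(face, eyes)       # (2, 64) per-frame features
gaze = seq_net(feat.unsqueeze(1).repeat(1, 5, 1))  # (2, 3)
```

The many-to-one readout (`out[:, -1]`) mirrors the abstract's description of predicting a single gaze estimate per video sequence rather than per frame.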