An Emotion-Embedded Visual Attention Model for Dimensional Emotion Context Learning

Author: Yuhao Tang, Qirong Mao, Hongjie Jia, Heping Song, Yongzhao Zhan
Language: English
Year of publication: 2019
Subject:
Source: IEEE Access, Vol 7, pp. 72457-72468 (2019)
Document type: article
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2911714
Description: Dimensional emotion recognition has attracted increasing attention from researchers in fields including psychology, cognitive science, and computer science. In this paper, we propose an emotion-embedded visual attention model (EVAM) to learn emotion context information for predicting affective dimension values from video sequences. First, a deep CNN is used to generate high-level representations of the raw face images. Second, a visual attention model based on the gated recurrent unit (GRU) is employed to learn context information from the sequence of face features. Third, the k-means algorithm is adapted to embed the previous emotion into the attention model, producing more robust time-series predictions that emphasize the influence of the previous emotion on the current affective prediction. All experiments are carried out on the AVEC 2016 and AVEC 2017 databases. The experimental results validate the effectiveness of our method, and competitive results are obtained.
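The pipeline outlined in the abstract (CNN frame features, a GRU with soft attention over the sequence, and a k-means-style embedding of the previous emotion into the final prediction) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: all dimensions, parameter names, and the linear output layer are illustrative assumptions, and random weights stand in for trained ones.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU update for a single time step (vector input and state)."""
    Wz, Uz, bz, Wr, Ur, br, Wc, Uc, bc = params
    z = sigmoid(x @ Wz + h @ Uz + bz)        # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)        # reset gate
    c = np.tanh(x @ Wc + (r * h) @ Uc + bc)  # candidate state
    return (1.0 - z) * h + z * c

def attention_pool(H, v):
    """Soft attention over the GRU hidden states H (shape T x d)."""
    scores = H @ v
    w = np.exp(scores - scores.max())        # stable softmax weights
    w /= w.sum()
    return w @ H, w                          # context vector, weights

def nearest_centroid(prev_emotion, centroids):
    """k-means-style assignment of the previous prediction to a cluster."""
    return int(np.linalg.norm(centroids - prev_emotion, axis=1).argmin())

rng = np.random.default_rng(0)
feat_dim, hid_dim, T, k = 16, 8, 5, 4        # illustrative sizes

# Randomly initialized stand-ins for trained parameters.
shapes = [(feat_dim, hid_dim), (hid_dim, hid_dim), (hid_dim,)] * 3
params = tuple(rng.standard_normal(s) * 0.1 for s in shapes)
v = rng.standard_normal(hid_dim)             # attention query vector
centroids = rng.standard_normal((k, 2))      # clusters over (arousal, valence)
W_out = rng.standard_normal((hid_dim + k, 2)) * 0.1

# Stand-in for CNN face features: one feature vector per video frame.
X = rng.standard_normal((T, feat_dim))

h = np.zeros(hid_dim)
H = np.stack([h := gru_step(x, h, params) for x in X])

context, w = attention_pool(H, v)

# Embed the previous emotion prediction as a one-hot cluster indicator.
prev_emotion = np.array([0.1, -0.2])         # previous (arousal, valence)
cluster = np.eye(k)[nearest_centroid(prev_emotion, centroids)]

# Current dimensional emotion prediction (arousal, valence).
pred = np.concatenate([context, cluster]) @ W_out
```

The sketch shows where the previous emotion enters: the cluster indicator is concatenated with the attention context before the output layer, so the current prediction is conditioned on where the previous prediction fell in emotion space.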
Database: Directory of Open Access Journals