Modeling emotion in complex stories: the Stanford Emotional Narratives Dataset
Author: | Isabella Kahhale, Tan Zhi-Xuan, Marianne C. Reddan, Desmond C. Ong, Jamil Zaki, Zhengxuan Wu, Alison M. Mattek |
Language: | English |
Year of publication: | 2019 |
Subject: | FOS: Computer and information sciences; Computer Science - Artificial Intelligence (cs.AI); Computer Science - Computer Vision and Pattern Recognition (cs.CV); 05 social sciences; 0501 psychology and cognitive sciences; 050105 experimental psychology; 02 engineering and technology; 0202 electrical engineering, electronic engineering, information engineering; 020201 artificial intelligence & image processing; Affect (psychology); Data modeling; Human-Computer Interaction; Recurrent neural network; Discriminative model; Memory model; Set (psychology); Hidden Markov model; Psychology; Affective computing; Software; Cognitive psychology; Article |
Source: | IEEE Trans Affect Comput |
Description: | Human emotions unfold over time, and affective computing research must prioritize capturing this crucial component of real-world affect. Modeling dynamic emotional stimuli requires solving the twin challenges of time-series modeling and of collecting high-quality time-series datasets. We begin by assessing the state of the art in time-series emotion recognition, and we review contemporary time-series approaches in affective computing, including discriminative and generative models. We then introduce the first version of the Stanford Emotional Narratives Dataset (SENDv1): a set of rich, multimodal videos of self-paced, unscripted emotional narratives, annotated for emotional valence over time. The complex narratives and naturalistic expressions in this dataset provide a challenging test for contemporary time-series emotion recognition models. We demonstrate several baseline and state-of-the-art modeling approaches on the SEND, including a Long Short-Term Memory model and a multimodal Variational Recurrent Neural Network, which perform comparably to the human benchmark. We end by discussing the implications for future research in time-series affective computing. 16 pages, 7 figures; accepted for publication in IEEE Transactions on Affective Computing. |
Database: | OpenAIRE |
External link: |