Multimodal Personality Recognition in Collaborative Goal-Oriented Tasks.

Authors: Batrinca, Ligia, Mana, Nadia, Lepri, Bruno, Sebe, Nicu, Pianesi, Fabio
Source: IEEE Transactions on Multimedia; Apr 2016, Vol. 18, Issue 4, p659-673, 15p
Abstract: Incorporating research on personality recognition into computers, from both a cognitive and an engineering perspective, would facilitate interactions between humans and machines. Previous attempts at personality recognition have focused on a variety of corpora (ranging from text to audiovisual data), scenarios (interviews, meetings), channels of communication (audio, video, text), and subsets of personality traits (out of the five traits of the Big Five model). Our study uses simple acoustic and visual nonverbal features extracted from multimodal data recorded in previously uninvestigated scenarios, and it considers all five personality traits rather than a subset. First, we look at the human–machine interaction scenario, in which we introduce the display of different “collaboration levels.” Second, we look at the contribution of the human–human interaction (HHI) scenario to the emergence of personality traits. Investigating the HHI scenario creates a stronger basis for future human–agent interactions. Our goal is to study, using a computational approach, the degree to which the five personality traits emerge in these two scenarios. The results demonstrate the relevance of each scenario to the degree of emergence of certain traits and the feasibility of automatically recognizing personality under different conditions. [ABSTRACT FROM PUBLISHER]
Database: Complementary Index
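
Note: the abstract outlines a pipeline, mapping simple acoustic and visual nonverbal features to scores on all five Big Five traits, without specifying the features or the learning method. The Python sketch below is only an illustration of that general idea under assumptions of our own: librosa prosodic descriptors (energy, zero-crossing rate, pitch), one RandomForestRegressor per trait, and synthetic placeholder clips and labels stand in for details the record does not provide.

    # Minimal sketch (not the authors' pipeline): extract simple acoustic
    # nonverbal features per clip and fit one regressor per Big Five trait.
    # All feature choices, models, and data here are illustrative assumptions.
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestRegressor

    TRAITS = ["Extraversion", "Agreeableness", "Conscientiousness",
              "Neuroticism", "Openness"]

    def acoustic_features(y, sr):
        """Summary statistics of simple prosodic descriptors for one clip."""
        rms = librosa.feature.rms(y=y)[0]               # energy envelope
        zcr = librosa.feature.zero_crossing_rate(y)[0]  # voicing/noisiness proxy
        f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)   # pitch track
        return np.array([rms.mean(), rms.std(),
                         zcr.mean(), zcr.std(),
                         np.nanmean(f0), np.nanstd(f0)])

    # Toy data: synthetic tones stand in for recorded clips; random scores
    # stand in for questionnaire-based Big Five labels.
    rng = np.random.default_rng(0)
    sr = 16000
    clips = [np.sin(2 * np.pi * rng.uniform(100, 300) *
                    np.arange(sr * 2) / sr) for _ in range(20)]
    X = np.vstack([acoustic_features(y, sr) for y in clips])
    labels = rng.uniform(1, 7, size=(20, len(TRAITS)))  # placeholder scores

    # One model per trait, mirroring the "all five traits" setup in the abstract.
    models = {trait: RandomForestRegressor(n_estimators=50, random_state=0)
                     .fit(X, labels[:, i])
              for i, trait in enumerate(TRAITS)}
    print({t: round(float(m.predict(X[:1])[0]), 2) for t, m in models.items()})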