Facial expressions can be categorized along the upper-lower facial axis, from a perceptual perspective
Author: Suyan Guo, Nianxin Guo, Xun Zhu, Chao Ma, Yantian Hou, Faraday Davies
Year of publication: 2021
Subjects: Linguistics and Language; Emotions; Experimental and Cognitive Psychology; Psychology and Cognitive Sciences; Face perception; Perception; Reaction time; Humans; Sensory cues; Facial expression; Sensory Systems; Categorization; Face; Photic stimulation; Neurology and neurosurgery; Cognitive psychology
Source: Attention, Perception, & Psychophysics, 83, 2159–2173
ISSN: 1943-3921 (print); 1943-393X (online)
Description: A critical question, fundamental for building models of emotion, is how to categorize emotions. Previous studies have typically taken one of two approaches: (a) focusing on pre-perceptual visual cues, that is, how salient facial features or configurations are displayed; or (b) focusing on post-perceptual affective experiences, that is, how emotions affect behavior. In this study, we attempted to group emotions at a peri-perceptual processing level: it is well known that humans perceive different facial expressions differently; can we therefore classify facial expressions into distinct categories in terms of their perceptual similarities? Here, using a novel non-lexical paradigm, we assessed the perceptual dissimilarities between 20 facial expressions using reaction times. Multidimensional-scaling analysis revealed that facial expressions were organized predominantly along the upper-lower face axis. Cluster analysis of the behavioral data delineated three superordinate categories, and eye-tracking measurements validated these clustering results. Interestingly, these superordinate categories can be conceptualized according to how facial displays interact with acoustic communication. The first group comprises expressions with salient mouth features; they are likely linked to species-specific vocalizations, for example, crying and laughing. The second group comprises visual displays with diagnostic features in both the mouth and the eye regions; they are not directly articulable but can be expressed prosodically, for example, sadness and anger. Expressions in the third group are also whole-face expressions but are completely independent of vocalization, and are likely blends of two or more elementary expressions. We propose a theoretical framework to interpret this tripartite division, in which the distinct expression subsets are construed as successive phases in an evolutionary chain. (A minimal sketch of the MDS-and-clustering pipeline appears after this record.)
Database: OpenAIRE
External link:
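To make the analysis pipeline described in the abstract concrete, the following is a minimal Python sketch, not the authors' code: it assumes a 20 × 20 dissimilarity matrix derived from pairwise reaction times (the record does not specify the exact RT-to-dissimilarity mapping, so random values stand in for the data), then applies non-metric multidimensional scaling and hierarchical clustering cut into three clusters, mirroring the MDS and cluster-analysis steps the abstract reports. All variable names and parameter settings here are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical stand-in for the paper's data: a symmetric 20x20
# dissimilarity matrix over facial expressions, derived here from
# random "reaction times" (the actual mapping from RTs to
# dissimilarities is an assumption, not taken from the paper).
rng = np.random.default_rng(0)
rt = rng.uniform(0.4, 1.2, size=(20, 20))
dissim = (rt + rt.T) / 2         # symmetrize
np.fill_diagonal(dissim, 0.0)    # zero self-dissimilarity

# Non-metric MDS on the precomputed dissimilarities, analogous to the
# multidimensional-scaling step; two output dimensions is an assumption.
mds = MDS(n_components=2, dissimilarity="precomputed", metric=False,
          random_state=0)
coords = mds.fit_transform(dissim)

# Agglomerative clustering cut into three clusters, mirroring the three
# superordinate categories reported in the abstract; linkage() expects
# the condensed (upper-triangular) form of the distance matrix.
condensed = squareform(dissim, checks=False)
labels = fcluster(linkage(condensed, method="average"),
                  t=3, criterion="maxclust")
print(coords.shape, labels)      # (20, 2) and a length-20 label vector
```

Average linkage and the two-dimensional embedding are common defaults for this kind of similarity analysis; the paper's actual linkage method, dimensionality, and stress criterion may differ.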