Description: |
Identifying a person’s emotions from their facial expressions is essential for navigating social interactions. Some individuals, such as those with autism spectrum disorder (ASD), have difficulty accurately identifying emotions from faces. These difficulties affect everyday interactions and contribute to the diagnostic phenotype. Facial emotion recognition (FER) tasks have been developed to measure facial expression recognition abilities in both neurotypical and psychiatric populations, in order to assess and quantify potential impairments. These tasks are also useful for retesting participants undergoing social skills or other training in order to track improvements.

FER tests to date often suffer from two limitations. First, stimulus sets used in established FER tasks are often limited to basic emotions (happiness, sadness, anger, disgust, fear, and surprise, plus neutral expressions as a baseline) and include no or only a few complex emotions (e.g., jealousy and boredom; Montagne et al., 2007). Using only basic emotions to quantify FER abilities may reduce the external validity of these tasks by not accounting for the wide array of more complex emotions encountered in everyday life. Second, FER tasks often use static images of emotional faces, which do not capture the naturalistic and dynamic aspects of facial emotion recognition.

The Face Puzzle tasks (Kliemann et al., 2013) previously addressed these limitations by using dynamic video stimuli featuring 15 actors portraying a wider variety of emotions, thereby approximating real-life facial emotion recognition more closely. The stimulus set consisted of 25 videos of emotional facial expressions, comprising 5 basic (angry, happy, disgusted, fearful, surprised) and 20 complex emotions (interested, amused, aggrieved, troubled, jealous, enthusiastic, apologetic, disappointed, relieved, expectant, bored, compassionate, contemptuous, pardoning, embarrassed, wistful, furious, content, confident, doubtful), for a total of 11 positive and 14 negative emotions. In an initial validation study, the Face Puzzle tasks showed good internal consistency and external validity, and sensitivity to impaired FER in adults with ASD.

The stimuli and task were originally designed in German, leaving open the question of whether the intended emotion expressions and their labels are valid in English, and thus whether the task can be used in English as well. The overall aim of this project is therefore to validate the stimulus set and task design for the English language. In Study 1, believability, valence, and arousal of the video stimuli were rated and a new set of validated video stimuli was established (see the Study 1 preregistration for details on the process; the resulting emotion items are compassionate, bored, wistful, surprised, relieved, envious, furious, worried, enthusiastic, expectant, disgusted, angry, happy, forgiving, doubtful, content, embarrassed, disappointed, interested, fearful, confident, apologetic, contemptuous, amused, and touched). In Study 2, we determined the construct validity of these items combined into the new Face Puzzle explicit task. The outcome of that procedure (see Study 2 for details) fell short of the original aim of a Cronbach’s alpha of 0.7 (Tavakol & Dennick, 2011), with a value of 0.683.
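For reference, Cronbach’s alpha reflects the ratio of shared to total variance across items. The sketch below is a minimal illustration only; the function and the small item-by-participant accuracy matrix are assumptions for demonstration, not the actual Study 2 data or analysis code.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 0/1 accuracy for 6 participants on 5 items
example = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
])
print(round(cronbach_alpha(example), 3))
```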
At least two factors may be relevant when evaluating this result. First, performance on the Face Puzzle task may be influenced by verbal intelligence and/or education level. Second, we did not administer other emotional face processing or social cognitive tasks to the online participants, making it difficult to evaluate performance on the Face Puzzle task.

To address these issues, we will conduct Study 3 as follows: we will assess the external validity of the English version of the Face Puzzle explicit task by relating accuracy on the Face Puzzle explicit task to other established measures of social cognitive abilities. We expect performance on the Face Puzzle explicit task to correlate positively with performance on the Reading the Mind in the Eyes Test (RMET; Baron-Cohen et al., 2001a; Hypothesis 1a), the Penn Emotion Recognition Test (ER-40; Kohler et al., 2003; Hypothesis 1b), and the Bell Lysaker Emotion Recognition Task (BLERT; Bell et al., 1997; Hypothesis 1c), and to correlate negatively with scores on the Toronto Alexithymia Scale (TAS-20; Bagby et al., 1994; Hypothesis 2a). Regarding the relationship with (verbal) intellectual functioning, we expect performance on the Face Puzzle explicit task either to show no relation to the verbal subscale of the Kaufman Brief Intelligence Test, 2nd edition (KBIT-2; Kaufman & Kaufman, 2004; Hypothesis 3a) or to show a weak positive correlation (Hypothesis 3b). Regarding the relationship with autistic traits, we expect performance on the Face Puzzle explicit task to correlate negatively with scores on the Autism Quotient (AQ; Baron-Cohen et al., 2001b; Hypothesis 4).
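As an illustration of how these directional associations could be examined, the sketch below correlates per-participant summary scores. The data frame, column names, and use of Pearson correlations are assumptions for demonstration only (simulated placeholder data), not the preregistered analysis pipeline.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Simulated placeholder scores standing in for per-participant summary data;
# column names are illustrative assumptions, not the preregistered variable names.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "face_puzzle": rng.normal(size=n),   # accuracy on the Face Puzzle explicit task
    "rmet": rng.normal(size=n),          # Reading the Mind in the Eyes Test
    "er40": rng.normal(size=n),          # Penn Emotion Recognition Test
    "blert": rng.normal(size=n),         # Bell Lysaker Emotion Recognition Task
    "tas20": rng.normal(size=n),         # Toronto Alexithymia Scale
    "kbit2_verbal": rng.normal(size=n),  # KBIT-2 verbal subscale
    "aq": rng.normal(size=n),            # Autism Quotient
})

# Expected direction of each association (Hypotheses 1a-c, 2a, 3a/b, 4).
expected = {
    "rmet": "+", "er40": "+", "blert": "+",  # H1a-c: positive correlations
    "tas20": "-",                            # H2a: negative correlation
    "kbit2_verbal": "0 or weak +",           # H3a/b: no relation or weak positive
    "aq": "-",                               # H4: negative correlation
}

for measure, direction in expected.items():
    r, p = pearsonr(df["face_puzzle"], df[measure])
    print(f"Face Puzzle vs {measure}: r = {r:+.2f}, p = {p:.3f} (expected {direction})")
```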