Emotional state dependence facilitates automatic imitation of visual speech
Authors: | Jasmine Virhia, Sonja A. Kotz, Patti Adank |
Contributors: | Section Neuropsychology, RS: FPN NPPP I |
Language: | English |
Year of publication: | 2019 |
Subjects: |
Adult; Young Adult; Humans; Speech; Speech production; Visual Perception; Attention; Cognition; Emotions; Emotional valence; Executive Function; Cognitive control; State dependence; Imitation; Imitative Behavior; Facial mimicry; Stimulus–response compatibility; Compatibility; Excitability; Modulation; Distortion; Recognition; Representations; Behavior; Responses; Cognitive psychology; Neuropsychology and Physiological Psychology; Experimental and Cognitive Psychology; General Psychology; General Medicine; Physiology; Physiology (medical) |
Source: | Quarterly Journal of Experimental Psychology, 72(12), 2833-2847. Psychology Press Ltd |
ISSN: | 1747-0218 |
DOI: | 10.1177/1747021819867856 |
Description: | Observing someone speak automatically triggers the cognitive and neural mechanisms required to produce speech, a phenomenon known as automatic imitation. Automatic imitation of speech can be measured using the stimulus–response compatibility (SRC) paradigm, which shows facilitated response times (RTs) when responding to a prompt (e.g., say aa) in the presence of a congruent distracter (a video of someone saying aa), compared with responding in the presence of an incongruent distracter (a video of someone saying oo). Current models of the relation between emotion and cognitive control suggest that automatic imitation can be modulated by varying stimulus-driven task aspects, that is, the distracter's emotional valence. It is unclear, however, how the emotional state of the observer affects automatic imitation. The current study explored independent effects of the distracter's emotional valence (Stimulus-driven Dependence) and the observer's emotional state (State Dependence) on automatic imitation of speech. Participants completed an SRC paradigm for visual speech stimuli, producing a prompt superimposed over a neutral or emotional (happy or angry) distracter video. State Dependence was manipulated by asking participants to speak the prompt in a neutral or emotional (happy or angry) voice. Automatic imitation was facilitated for emotional prompts, but not for emotional distracters, implying a facilitating effect of State Dependence. The results are interpreted in the context of theories of automatic imitation and cognitive control, and we suggest that models of automatic imitation need to be modified to accommodate state-dependent and stimulus-driven effects. |
Database: | OpenAIRE |
External link: |