Visual information constrains early and late stages of spoken-word recognition in sentence context

Author: Angèle Brunellière, Salvador Soto-Faraco, Carolina Sánchez-García, Nara Ikumi
Contributors: Unité de Recherche en Sciences Cognitives et Affectives (URECA), Université de Lille, Sciences Humaines et Sociales-PRES Université Lille Nord de France
Language: English
Year of publication: 2013
Subject:
Speech Perception
Visual Perception
Speech recognition
Spoken-word recognition
Word recognition
Word processing
Psycholinguistics
Visual speech
Phonetics
Semantics
Semantic constraints
Sentence
Context (language use)
Comprehension
Communication
Reading
Recognition (Psychology)
Event-related potentials
N400
Evoked Potentials, Auditory
Electroencephalography
Acoustic Stimulation
Photic Stimulation
Fixation, Ocular
Data Interpretation, Statistical
Physiology (medical)
Neuropsychology and Physiological Psychology
General Neuroscience
Psychology
Humans
Male
Female
Adolescent
Young Adult
Adult
[SCCO.PSYC]Cognitive science/Psychology
[SCCO.NEUR]Cognitive science/Neuroscience
03 medical and health sciences
0302 clinical medicine
030217 neurology & neurosurgery
05 social sciences
0501 psychology and cognitive sciences
050105 experimental psychology
Source: International Journal of Psychophysiology
International Journal of Psychophysiology, Elsevier, 2013, 89(1), pp. 136–147. ⟨10.1016/j.ijpsycho.2013.06.016⟩
Recercat. Dipòsit de la Recerca de Catalunya
ISSN: 0167-8760; 1872-7697
Description: Audiovisual speech perception has frequently been studied at the phoneme, syllable, and word processing levels. Here, we examined the constraints that visual speech information may exert during the recognition of words embedded in a natural sentence context. We recorded event-related potentials (ERPs) to words that were either strongly or weakly predictable from the prior semantic sentence context and whose initial phoneme varied in its degree of visual saliency in the lip movements. When the sentences were presented audiovisually (Experiment 1), words weakly predicted by the semantic context elicited a larger, longer-lasting N400 than strongly predictable words. This semantic effect interacted with the degree of visual saliency over a late part of the N400. When comparing audiovisual versus auditory-alone presentation (Experiment 2), the typical amplitude-reduction effect over the auditory-evoked N100 response was observed in the audiovisual modality. Interestingly, a specific benefit of high- versus low-visual-saliency constraints occurred over the early N100 response and in the late N400 time window, confirming the result of Experiment 1. Taken together, our results indicate that the saliency of visual speech can influence both auditory processing and word recognition at relatively late stages, and thus suggest strong interactivity between audiovisual integration and other (arguably higher) stages of information processing during natural speech comprehension. This research was supported by the Spanish Ministry of Science and Innovation (PSI2010-15426 and Consolider INGENIO CSD2007-00012), the Comissionat per a Universitats i Recerca del DIUE-Generalitat de Catalunya (SGR2009-092), and the European Research Council (StG-2010263145).
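The record contains no analysis code; as a loose illustration of the ERP logic described above (per-condition trial averaging and a mean-amplitude measure in an N400 time window), the following is a minimal sketch using synthetic data and the MNE-Python library. The channel names, the 300–500 ms window, the amplitudes, and the condition labels are assumptions made for illustration, not the authors' actual pipeline or parameters.

import numpy as np
import mne

sfreq = 250.0  # sampling rate in Hz (assumed)
info = mne.create_info(["Cz", "Pz"], sfreq, ch_types="eeg")
times = np.arange(-0.2, 0.8, 1.0 / sfreq)  # -200 to ~800 ms around word onset
rng = np.random.default_rng(0)

def simulate_condition(n400_amp_uv, n_trials=40):
    # Synthetic trials: a Gaussian deflection peaking 400 ms after word
    # onset (the "N400"), plus Gaussian noise. Amplitudes are in volts.
    component = n400_amp_uv * 1e-6 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
    data = component + rng.normal(0.0, 2e-6, size=(n_trials, 2, times.size))
    return mne.EpochsArray(data, info, tmin=times[0], verbose=False)

# Larger (more negative) N400 for weakly predictable words, as the study reports.
epochs = {"weakly_predictable": simulate_condition(-6.0),
          "strongly_predictable": simulate_condition(-2.0)}

for condition, ep in epochs.items():
    evoked = ep.average()                             # ERP = mean over trials
    n400 = evoked.copy().crop(tmin=0.30, tmax=0.50)   # assumed N400 window
    print(f"{condition}: mean 300-500 ms amplitude = "
          f"{n400.data.mean() * 1e6:.2f} µV")

Run as-is, the script prints a more negative mean amplitude for the weakly predictable condition, mirroring the direction of the N400 effect described in the abstract.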
Database: OpenAIRE