Visemic processing in audiovisual discrimination of natural speech: A simultaneous fMRI–EEG study

Author: Daniel Gounot, Marie-Noëlle Metz-Lutz, Hélène Otzenberger, Rudolph Sock, Cyril Dubois
Year of publication: 2012
Subject:
Source: Neuropsychologia. 50:1316-1326
ISSN: 0028-3932
Description: In a noisy environment, visual perception of articulatory movements improves the intelligibility of natural speech. Parallel to phonemic processing based on the auditory signal, visemic processing constitutes a counterpart based on "visemes", the distinctive visual units of speech. To investigate the neural substrates of visemic processing in a disturbed environment, we carried out a simultaneous fMRI-EEG experiment based on discriminating syllabic minimal pairs involving three phonological contrasts, each bearing on a single phonetic feature characterised by a different degree of visual distinctiveness. The contrasts involved either labialisation of the vowels, or place of articulation or voicing of the consonants. Audiovisual consonant-vowel syllable pairs were presented either with a static facial configuration or with a dynamic display of the articulatory movements related to speech production. In the sound-disturbed MRI environment, the significant improvement in syllabic discrimination achieved in the dynamic audiovisual modality, compared with the static audiovisual modality, was associated with activation of the occipito-temporal cortex (MT+V5) bilaterally and of the left premotor cortex. Whereas the former was activated in response to facial movements independently of their relation to speech, the latter was specifically activated by phonological discrimination. During fMRI, significant evoked potential responses to syllabic discrimination were recorded around 150 and 250 ms after the onset of the second stimulus of each pair; their amplitude was greater in the dynamic than in the static audiovisual modality. Our results provide arguments for the involvement of the speech motor cortex in phonological discrimination and suggest a multimodal representation of speech units.
Database: OpenAIRE