Author: |
de la Cruz-Pavía I; Integrative Neuroscience and Cognition Center (INCC-UMR 8002), Université Paris Descartes (Sorbonne Paris Cité), France; Integrative Neuroscience and Cognition Center (INCC-UMR 8002), CNRS, France., Werker JF; Department of Psychology, University of British Columbia, Canada., Vatikiotis-Bateson E; Department of Linguistics, University of British Columbia, Canada., Gervain J; Integrative Neuroscience and Cognition Center (INCC-UMR 8002), Université Paris Descartes (Sorbonne Paris Cité), France; Integrative Neuroscience and Cognition Center (INCC-UMR 8002), CNRS, France. |
Language: |
English |
Source: |
Language and speech [Lang Speech] 2020 Jun; Vol. 63 (2), pp. 264-291. Date of Electronic Publication: 2019 Apr 19. |
DOI: |
10.1177/0023830919842353 |
Abstract: |
The audiovisual speech signal contains multimodal cues to phrase boundaries. In three artificial language learning studies with 12 groups of adult participants, we investigated whether English monolinguals and bilingual speakers of English and a language with the opposite basic word order (i.e., one in which objects precede verbs) can use word frequency, phrasal prosody, and co-speech (facial) visual information, namely head nods, to parse unknown languages into phrase-like units. We showed that both monolinguals and bilinguals used the auditory and visual sources of information to chunk "phrases" from the input. These results suggest that speech segmentation is a bimodal process, although the influence of co-speech facial gestures is rather limited and tied to the presence of auditory prosody. Importantly, a pragmatic factor, namely the language of the context, appears to determine the bilinguals' segmentation, overriding the auditory and visual cues and revealing a factor that warrants further exploration. |
Database: |
MEDLINE |
External link: |
|