Children flexibly seek visual information during signed and spoken language comprehension
Author: | Kyle Earl MacDonald, Virginia A. Marchman, Anne Fernald, Michael C. Frank |
Year of publication: | 2018 |
Subject: |
PsyArXiv|Social and Behavioral Sciences|Cognitive Psychology|Language
PsyArXiv|Social and Behavioral Sciences PsyArXiv|Social and Behavioral Sciences|Developmental Psychology|Language Acquisition bepress|Social and Behavioral Sciences PsyArXiv|Social and Behavioral Sciences|Cognitive Psychology PsyArXiv|Social and Behavioral Sciences|Developmental Psychology bepress|Social and Behavioral Sciences|Psychology|Child Psychology bepress|Social and Behavioral Sciences|Psychology|Developmental Psychology bepress|Social and Behavioral Sciences|Psychology|Cognitive Psychology |
DOI: | 10.31234/osf.io/2r95b |
Description: | During grounded language comprehension, listeners must link the incoming linguistic signal to the visual world despite noise in the input. Information gathered through visual fixations can facilitate understanding. But do listeners flexibly seek supportive visual information? Here, we propose that even young children can adapt their gaze and actively gather information that supports their language understanding. We present two case studies of eye movements during real-time language processing in which the value of fixating on a social partner varies across contexts. First, compared to children learning spoken English (n=80), young American Sign Language (ASL) learners (n=30) delayed gaze shifts away from a language source and produced a higher proportion of language-consistent eye movements. This result suggests that ASL learners adapt to dividing attention between language and referents, which compete for processing via the same channel: vision. Second, English-speaking preschoolers (n=39) and adults (n=31) delayed the timing of gaze shifts away from a speaker’s face while processing language in a noisy auditory environment. This delay resulted in a higher proportion of language-consistent gaze shifts. These results suggest that young listeners can adapt their gaze to seek supportive visual information from social partners during real-time language comprehension. |
Database: | OpenAIRE |
External link: |