How Looking While Listening Affects Speech Segmentation

Author: Jaime Leung
Year of publication: 2018
Source: Inquiry@Queen's Undergraduate Research Conference Proceedings.
ISSN: 2563-8912
DOI: 10.24908/iqurcp.8574
Description: This study examines the mechanisms by which people learn the words of a new language. Syllables within a word are more likely to occur together than syllables that span a word boundary, and both infants and adults use these transitional probabilities to extract words from speech. However, previous research has examined speech segmentation only when learners are presented with speech alone. In natural contexts, we look while we listen, and what we see is correlated with what we hear. The goal of my study was to explore how visual context affects adult speech segmentation. To do so, we used three conditions: one in which adults were presented with only a word stream, one in which adults saw animations that corresponded to the words they heard while listening, and one in which the animations did not correspond to the words they heard. One hypothesis is that participants in the audio-visual conditions would perform better on the segmentation task because the statistical boundaries in the audio are reinforced by the visual boundaries between animations. However, it is also possible that the visual information impairs performance because learners engage in learning the meanings of words in addition to segmenting the speech. Preliminary results support the latter hypothesis.
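The transitional probabilities the abstract refers to can be illustrated with a minimal sketch. The syllable stream and "words" below are hypothetical toy stimuli (not the study's actual materials), built in the style of classic statistical-learning experiments: a forward transitional probability TP(B|A) = freq(AB) / freq(A) is high for syllable pairs inside a word and lower for pairs that cross a word boundary.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Compute forward transitional probabilities TP(B|A) = freq(AB) / freq(A)
    for each adjacent syllable pair in a continuous stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    # Count each syllable only when it appears as the first element of a pair.
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy stream of three hypothetical trisyllabic "words" (bi-da-ku, pa-do-ti,
# go-la-bu) concatenated without pauses between them.
stream = "bi da ku pa do ti go la bu bi da ku go la bu pa do ti bi da ku".split()
tps = transitional_probabilities(stream)
# Within-word pairs (e.g. bi -> da) have TP = 1.0 in this stream, while
# cross-boundary pairs (e.g. ku -> pa) have lower TP, marking word edges.
```

In this sketch a learner could segment the stream by placing word boundaries wherever the transitional probability dips, which is the statistical cue the audio-only condition isolates.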
Database: OpenAIRE