Metacognition and Causal Inference in Audiovisual Speech

Authors: Faith Kimmet, Samantha Pedersen, Victoria Cardenas, Camila Rubiera, Grey Johnson, Addison Sans, Brian Odegaard
Year of publication: 2022
Description: In multisensory environments, our brains perform causal inference to estimate which sources produce specific sensory signals. Decades of research have revealed the dynamics underlying this process of causal inference for multisensory (audiovisual) signals, including how temporal, spatial, and semantic relationships between stimuli influence the brain’s decision about whether to integrate or segregate. However, very little is presently known about the relationship between metacognition and multisensory integration, or about the characteristics of perceptual confidence for audiovisual signals. In this investigation, we ask two questions about the relationship between metacognition and multisensory causal inference: are observers’ confidence ratings for judgments about congruent, McGurk, and rarely integrated speech similar or different? And do confidence judgments distinguish between these three scenarios when the perceived syllable is identical? To answer these questions, 92 online participants completed experiments in which, on each trial, they reported which syllable they perceived and rated confidence in their judgment. Results from Experiment 1 showed that confidence was highest for congruent speech and lower for McGurk and rarely integrated speech. In Experiment 2, when the perceived syllable for congruent and McGurk videos was matched, confidence scores were higher for congruent stimuli than for McGurk stimuli. In Experiment 3, when the perceived syllable was matched between McGurk and rarely integrated stimuli, confidence judgments were similar between the two conditions. Together, these results provide evidence of the capacities and limitations of metacognition’s ability to index multisensory causal inference.
Database: OpenAIRE