Cortical Representations of Speech in a Multi-talker Auditory Scene
Author: | Krishna C Puvvada, Jonathan Z. Simon |
---|---|
Language: | English |
Year of publication: | 2017 |
Subject: | Auditory scene analysis, Computational auditory scene analysis, Magnetoencephalography, Auditory cortex, Auditory system, Speech recognition, Perception, Stimulus (physiology), Fidelity, Psychology |
DOI: | 10.1101/124750 |
Description: | The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources from peripheral, tonotopically based representations in the auditory nerve into perceptually distinct, auditory object-based representations in auditory cortex. Here, using magnetoencephalography (MEG) recordings from human subjects, both men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of auditory cortex. Using systems-theoretic methods of stimulus reconstruction (see the decoding sketch after this record), we show that the primary-like areas in auditory cortex contain predominantly spectrotemporal representations of the entire auditory scene. In these areas, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene, with all its streams, is a better candidate neural representation than one in which individual streams are represented separately. In contrast, we also show that higher-order auditory cortical areas represent the attended stream separately from, and with significantly higher fidelity than, unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object than as separated objects. Taken together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of human auditory cortex. Significance Statement: Using magnetoencephalography (MEG) recordings from human listeners in a simulated cocktail-party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of auditory cortex. We show that the primary-like areas in auditory cortex use a predominantly spectrotemporal representation of the entire auditory scene, with both attended and ignored speech streams represented with almost equal fidelity. In contrast, we show that higher-order auditory cortical areas represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects. |
Database: | OpenAIRE |
External link: |
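
The description above mentions systems-theoretic methods of stimulus reconstruction. As a rough illustration of that general class of analysis, and not the authors' actual pipeline, the sketch below fits a linear backward model: a ridge-regression decoder that reconstructs a speech envelope from time-lagged MEG channels and scores reconstruction fidelity as the correlation between the reconstructed and actual envelopes. The function names, lag range, and ridge parameter are assumptions chosen for illustration only.

```python
# Minimal sketch of a linear stimulus-reconstruction (backward) model of the
# kind commonly used in MEG/EEG speech-decoding studies. Illustration only;
# lag range and ridge parameter are arbitrary assumptions.

import numpy as np

def build_lagged_design(meg, max_lag):
    """Stack time-lagged copies of each MEG channel into a design matrix.

    meg     : array of shape (n_times, n_channels)
    max_lag : number of past samples of MEG used to predict the stimulus
    """
    n_times, n_channels = meg.shape
    X = np.zeros((n_times, n_channels * (max_lag + 1)))
    for lag in range(max_lag + 1):
        # Each block of columns holds the MEG delayed by `lag` samples.
        X[lag:, lag * n_channels:(lag + 1) * n_channels] = meg[:n_times - lag]
    return X

def fit_decoder(meg, envelope, max_lag=50, ridge=1e3):
    """Ridge-regression decoder mapping lagged MEG to a speech envelope."""
    X = build_lagged_design(meg, max_lag)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    Xty = X.T @ envelope
    return np.linalg.solve(XtX, Xty)

def reconstruction_fidelity(meg, envelope, weights, max_lag=50):
    """Correlation between reconstructed and actual envelope ('fidelity')."""
    X = build_lagged_design(meg, max_lag)
    recon = X @ weights
    return np.corrcoef(recon, envelope)[0, 1]

# Tiny synthetic usage example (random data, for demonstration only).
rng = np.random.default_rng(0)
meg = rng.standard_normal((2000, 20))            # 2000 samples x 20 channels
envelope = meg[:, 0] + 0.5 * rng.standard_normal(2000)
w = fit_decoder(meg, envelope)
print(reconstruction_fidelity(meg, envelope, w))
```

In practice such decoders are fit and evaluated with cross-validation, and attention effects of the kind described in the abstract would rest on comparing how well the attended versus unattended streams' envelopes can be reconstructed; the sketch omits those details.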