Neural speech restoration at the cocktail party: Auditory cortex recovers masked speech of both attended and ignored speakers

Author: Alex Jiao, L. Elliot Hong, Christian Brodbeck, Jonathan Z. Simon
Year of publication: 2020
Subject:
Male
Time Factors
Computer Science
Physiology
Audio Signal Processing
Speech Recognition
Sensory Physiology
Social Sciences
Cortical Processing
Medicine and Health Sciences
Attention
Biology (General)
Brain Mapping
Physics
General Neuroscience
Process (Computing)
Magnetoencephalography
Brain
Middle Aged
Sensory Systems
Auditory System
Physical Sciences
Engineering and Technology
Cocktail Party
Female
Anatomy
General Agricultural and Biological Sciences
Psychological Phenomena and Processes
Research Article
Adult
Imaging Techniques
Bioacoustics
Neuroimaging
Biology
Research and Analysis Methods
Auditory Cortex
Models, Biological
Behavioral Disciplines and Activities
General Biochemistry, Genetics and Molecular Biology
Young Adult
Acoustic Signals
Otorhinolaryngologic Diseases
Humans
Speech
Active Listening
General Immunology and Microbiology
Biology and Life Sciences
Linguistics
Acoustics
Acoustic Stimulation
Speech Signal Processing
Signal Processing
Binaural Recording
Neuroscience
Source: PLoS Biology, Vol 18, Iss 10, p e3000883 (2020)
PLoS Biology
ISSN: 1545-7885
Description: Humans are remarkably skilled at listening to one speaker out of an acoustic mixture of several speech sources. Two speakers are easily segregated, even without binaural cues, but the neural mechanisms underlying this ability are not well understood. One possibility is that early cortical processing performs a spectrotemporal decomposition of the acoustic mixture, allowing the attended speech to be reconstructed via optimally weighted recombinations that discount spectrotemporal regions where sources heavily overlap. Using human magnetoencephalography (MEG) responses to a 2-talker mixture, we show evidence for an alternative possibility, in which early, active segregation occurs even for strongly spectrotemporally overlapping regions. Early (approximately 70-millisecond) responses to nonoverlapping spectrotemporal features are seen for both talkers. When competing talkers’ spectrotemporal features mask each other, the individual representations persist, but they occur with an approximately 20-millisecond delay. This suggests that the auditory cortex recovers acoustic features that are masked in the mixture, even if they occurred in the ignored speech. The existence of such noise-robust cortical representations, of features present in attended as well as ignored speech, suggests an active cortical stream segregation process, which could explain a range of behavioral effects of ignored background speech.
How do humans focus on one speaker when several are talking? MEG responses to a continuous two-talker mixture suggest that, even though listeners attend only to one of the talkers, their auditory cortex tracks acoustic features from both speakers. This occurs even when those features are locally masked by the other speaker.
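The key concept above, spectrotemporal masking, can be illustrated with a small sketch: a time-frequency bin of one talker's spectrogram counts as "masked" when the competing talker's power there exceeds it by some margin. Everything below (the NumPy-only STFT helper, the 6-dB margin, and the toy tone-versus-noise signals) is an illustrative assumption, not the paper's actual criterion or analysis, which involved MEG responses rather than the stimuli alone.

```python
import numpy as np

def stft_power(x, n_fft=256, hop=128):
    """Magnitude-squared STFT with a Hann window (NumPy only)."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop: i * hop + n_fft] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

def masking_map(spec_a, spec_b, margin_db=6.0):
    """Boolean map of bins where talker A is masked by talker B.

    A bin is 'masked' when B's power exceeds A's by at least margin_db;
    the 6-dB margin is an arbitrary illustrative choice."""
    eps = 1e-12
    ratio_db = 10 * np.log10((spec_b + eps) / (spec_a + eps))
    return ratio_db >= margin_db

# Toy stimuli: an amplitude-modulated tone vs. a noise-like competitor.
rng = np.random.default_rng(0)
fs = 8000
t = np.arange(2 * fs) / fs
talker_a = np.sin(2 * np.pi * 300 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
talker_b = 0.8 * rng.standard_normal(t.size)

spec_a = stft_power(talker_a)
spec_b = stft_power(talker_b)
masked = masking_map(spec_a, spec_b)
print(f"fraction of A's spectrotemporal bins masked by B: {masked.mean():.2f}")
```

Because the tone's energy is concentrated in a few frequency bins while the noise is broadband, most of the tone talker's bins end up masked; the paper's finding is that cortical responses to such masked features still emerge, only with an extra delay of roughly 20 ms.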
Database: OpenAIRE