Popis: |
The goal of sensory substitution is to convey the information transduced by one sensory system through a novel sensory modality. One example is vibrotactile (VT) speech, in which acoustic speech is transformed into vibrotactile patterns. Despite an almost century-long history of studying vibrotactile speech, there has been no study of the neural bases of VT speech learning. Here, we trained hearing adult participants to recognize VT speech syllables. Using fMRI, we showed that both somatosensory (left post-central gyrus) and auditory (right temporal lobe) regions acquire selectivity for VT speech stimuli following training. The right planum temporale in particular was selective for both VT and auditory speech. EEG source-estimated activity revealed temporal dynamics consistent with direct, low-latency engagement of the right temporal lobe following activation of the left post-central gyrus. Our results suggest that VT speech learning achieves integration with the auditory speech system by “piggybacking” onto corresponding auditory speech representations.

Significance statement: In sensory substitution, the information conveyed by one sensory system is used to replace the function of another. Blind individuals, for example, can learn to use a visual-to-acoustic sensory substitution device to navigate the world. We tested the hypothesis that sensory substitution can occur more generally in typical individuals by exploiting the existence of multi-sensory convergence areas in the brain. We trained hearing participants to recognize speech syllables presented as vibrotactile stimulation patterns. Using fMRI and EEG, we show that vibrotactile speech learning integrates with the auditory speech system by “piggybacking” onto corresponding auditory speech representations. These results suggest that the human brain can bootstrap new senses by leveraging one sensory system to process information normally processed by another. |