Representation of internal speech by single neurons in human supramarginal gyrus.

Authors: Wandelt SK (1,2; skwandelt@gmail.com), Bjånes DA (1,2,3), Pejsa K (1,2), Lee B (1,4,5), Liu C (1,3,4,5), Andersen RA (1,2)
Affiliations: 1. Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA. 2. T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA. 3. Rancho Los Amigos National Rehabilitation Center, Downey, CA, USA. 4. Department of Neurological Surgery, Keck School of Medicine of USC, Los Angeles, CA, USA. 5. USC Neurorestoration Center, Keck School of Medicine of USC, Los Angeles, CA, USA.
Language: English
Source: Nature Human Behaviour [Nat Hum Behav] 2024 Jun; Vol. 8 (6), pp. 1136-1149. Date of Electronic Publication: 2024 May 13.
DOI: 10.1038/s41562-024-01867-y
Abstract: Speech brain-machine interfaces (BMIs) translate brain signals into words or audio outputs, enabling communication for people who have lost the ability to speak due to disease or injury. While important advances in the decoding of vocalized, attempted and mimed speech have been achieved, results for internal speech decoding are sparse and have yet to achieve high functionality. Notably, it is still unclear from which brain areas internal speech can be decoded. Here, two participants with tetraplegia, implanted with microelectrode arrays in the supramarginal gyrus (SMG) and primary somatosensory cortex (S1), performed internal and vocalized speech of six words and two pseudowords. In both participants, we found significant neural representation of internal and vocalized speech at the single-neuron and population levels in the SMG. From recorded population activity in the SMG, the internally spoken and vocalized words were significantly decodable. In an offline analysis, we achieved average decoding accuracies of 55% and 24% for the two participants, respectively (chance level 12.5%), and during an online internal speech BMI task, we averaged 79% and 23% accuracy, respectively. Evidence of shared neural representations among internal speech, word reading and vocalized speech processes was found in participant 1. The SMG represented words as well as pseudowords, providing evidence for phonetic encoding. Furthermore, our decoder achieved high classification accuracy with multiple internal speech strategies (auditory imagination/visual imagination). Activity in S1 was modulated by vocalized but not internal speech in both participants, suggesting that no articulator movements of the vocal tract occurred during internal speech production. This work represents a proof of concept for a high-performance internal speech BMI.
(© 2024. The Author(s).)
Database: MEDLINE