A high-performance neuroprosthesis for speech decoding and avatar control.
Authors: Metzger SL (1,2,3), Littlejohn KT (1,2,4), Silva AB (1,2,3), Moses DA (1,2), Seaton MP (1), Wang R (1,2), Dougherty ME (1), Liu JR (1,2,3), Wu P (4), Berger MA (5), Zhuravleva I (4), Tu-Chan A (6), Ganguly K (2,6), Anumanchipalli GK (1,2,4), Chang EF (1,2,3; edward.chang@ucsf.edu).
Affiliations: (1) Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; (2) Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; (3) University of California, Berkeley-University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA; (4) Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA; (5) Speech Graphics Ltd, Edinburgh, UK; (6) Department of Neurology, University of California, San Francisco, San Francisco, CA, USA.
Language: English
Source: Nature, 2023 Aug; Vol. 620 (7976), pp. 1037-1046. Date of electronic publication: 23 Aug 2023.
DOI: 10.1038/s41586-023-06443-4
Abstract: Speech neuroprostheses have the potential to restore communication to people living with paralysis, but naturalistic speed and expressivity are elusive [1]. Here we use high-density surface recordings of the speech cortex in a clinical-trial participant with severe limb and vocal paralysis to achieve high-performance real-time decoding across three complementary speech-related output modalities: text, speech audio and facial-avatar animation. We trained and evaluated deep-learning models using neural data collected as the participant attempted to silently speak sentences. For text, we demonstrate accurate and rapid large-vocabulary decoding with a median rate of 78 words per minute and median word error rate of 25%. For speech audio, we demonstrate intelligible and rapid speech synthesis and personalization to the participant's pre-injury voice. For facial-avatar animation, we demonstrate the control of virtual orofacial movements for speech and non-speech communicative gestures. The decoders reached high performance with less than two weeks of training. Our findings introduce a multimodal speech-neuroprosthetic approach that has substantial promise to restore full, embodied communication to people living with severe paralysis. (© 2023. The Author(s), under exclusive licence to Springer Nature Limited.)
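Note on the reported metric: word error rate (WER) is the standard measure of text-decoding accuracy. The sketch below is illustrative only (the `wer` function name and example sentences are assumptions, not from the paper); it shows how a 25% WER corresponds to one word-level edit per four reference words.

```python
# Illustrative sketch (not from the paper): WER is the word-level edit
# distance between a decoded sentence and its reference, divided by the
# number of reference words.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words; substitutions,
    # insertions and deletions all cost 1.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution/match
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word in a four-word reference gives a WER of 25%.
print(wer("i want some water", "i want some coffee"))  # 0.25
```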
Database: MEDLINE