Author: |
Krishna G, Carnahan M, Shamapant S, Surendranath Y, Jain S, Ghosh A, Tran C, Millan JDR, Tewfik AH |
Language: |
English |
Source: |
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference [Annu Int Conf IEEE Eng Med Biol Soc] 2021 Nov; Vol. 2021, pp. 6008-6014. |
DOI: |
10.1109/EMBC46164.2021.9629802 |
Abstract: |
In this paper, we propose a deep learning-based algorithm to improve the performance of automatic speech recognition (ASR) systems for aphasia, apraxia, and dysarthria speech by utilizing electroencephalography (EEG) features recorded synchronously with that speech. We demonstrate a significant decoding performance improvement of more than 50% at test time on the isolated speech recognition task, and we also provide preliminary results indicating a performance improvement on the more challenging continuous speech recognition task when EEG features are utilized. The results presented in this paper are a first step towards demonstrating the possibility of utilizing non-invasive neural signals to design a real-time, robust speech prosthetic for stroke survivors recovering from aphasia, apraxia, and dysarthria. Our aphasia, apraxia, and dysarthria speech-EEG data set will be released to the public to help further advance this interesting and crucial research. |
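The abstract describes utilizing EEG features recorded synchronously with speech to aid ASR decoding. One plausible way to do this, sketched below purely as an illustration, is frame-level fusion: concatenating each time-aligned acoustic feature vector with the corresponding EEG feature vector before feeding the fused stream to a recognizer. The function name, dimensions, and the concatenation strategy are assumptions for this sketch, not the paper's actual architecture.

```python
def fuse_features(acoustic_frames, eeg_frames):
    """Concatenate synchronously recorded acoustic and EEG feature
    vectors frame by frame, yielding one fused vector per time step.

    This frame-level concatenation is a hypothetical fusion scheme,
    not necessarily the method used in the paper.
    """
    if len(acoustic_frames) != len(eeg_frames):
        raise ValueError("feature streams must be time-aligned")
    return [a + e for a, e in zip(acoustic_frames, eeg_frames)]

# Toy example: 3 time steps, 2-dim acoustic + 2-dim EEG features.
acoustic = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
eeg = [[1.0, 1.1], [1.2, 1.3], [1.4, 1.5]]
fused = fuse_features(acoustic, eeg)
print(len(fused), len(fused[0]))  # 3 time steps, 4-dim fused vectors
```

In practice the fused vectors would be consumed by a neural ASR model; the key requirement this sketch highlights is that the two streams must be time-aligned, which the synchronous recording described in the abstract provides.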
Database: |
MEDLINE |
External link: |
|