Author:
Morgan Stuart, Srdjan Lesaja, Jerry J. Shih, Tanja Schultz, Milos Manic, Dean J. Krusienski
Language:
English
Year of publication:
2022
Subject:

Source:
IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol 30, Pp 2783-2792 (2022)
Document type:
article
ISSN:
1558-0210
DOI:
10.1109/TNSRE.2022.3207624
Description:
Numerous state-of-the-art solutions for neural speech decoding and synthesis incorporate deep learning into the processing pipeline. These models are typically opaque and can require significant computational resources to train and execute. A deep learning architecture is presented that learns input bandpass filters, capturing task-relevant spectral features directly from the data. Incorporating such explainable feature extraction into the model furthers the goal of creating end-to-end architectures that enable automated subject-specific parameter tuning while yielding an interpretable result. The model is evaluated on intracranial brain data collected during a speech task. Using raw, unprocessed time samples, the model detects the presence of speech at every time sample in a causal manner, making it suitable for online application. Model performance is comparable or superior to that of existing approaches that require substantial signal preprocessing, and the learned frequency bands were found to converge to ranges supported by previous studies.
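The record does not include the paper's exact architecture, but the idea of a learnable input bandpass filter can be illustrated with a parameterized windowed-sinc FIR kernel applied causally, in the style of SincNet-like layers. The sketch below uses NumPy; the function names, tap count, sampling rate, and cutoff frequencies are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sinc_bandpass(f_low, f_high, num_taps=129, fs=1000.0):
    """Windowed-sinc bandpass FIR kernel parameterized by its cutoff
    frequencies in Hz. In a learnable-filter layer, f_low and f_high
    would be the trainable parameters (hypothetical setup)."""
    t = np.arange(num_taps) - (num_taps - 1) / 2
    # difference of two low-pass sinc kernels yields a bandpass
    h = (2 * f_high / fs) * np.sinc(2 * f_high * t / fs) \
      - (2 * f_low / fs) * np.sinc(2 * f_low * t / fs)
    return h * np.hamming(num_taps)  # window to reduce spectral ripple

def causal_filter(x, h):
    """Causal application: each output sample depends only on the
    current and past input samples, as required for online use."""
    return np.convolve(x, h)[: len(x)]

# toy check: a 50-90 Hz band passes a 70 Hz tone and rejects a 5 Hz tone
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
tone_in = np.sin(2 * np.pi * 70 * t)   # inside the band
tone_out = np.sin(2 * np.pi * 5 * t)   # outside the band
h = sinc_bandpass(50.0, 90.0, fs=fs)
# in-band energy should dominate out-of-band energy after filtering
print(np.std(causal_filter(tone_in, h)) > 5 * np.std(causal_filter(tone_out, h)))
```

In a gradient-based version, the two cutoffs per filter would be updated by backpropagation, which is what makes the learned frequency bands directly inspectable, in contrast to opaque free-form convolution kernels.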
Database:
Directory of Open Access Journals
External link:
