Impacts of simplifying articulation movements imagery to speech imagery BCI performance

Author: Zengzhi Guo, Fei Chen
Year of publication: 2023
Source: Journal of Neural Engineering. 20:016036
ISSN: 1741-2552
1741-2560
DOI: 10.1088/1741-2552/acb232
Abstract: Objective. Speech imagery (SI) can be used as a reliable, natural, and user-friendly activation task for developing brain-computer interfaces (BCIs), which empower individuals with severe disabilities to interact with their environment. Functional near-infrared spectroscopy (fNIRS) is regarded as one of the most suitable brain imaging methods for developing BCI systems owing to its advantages of being non-invasive, portable, insensitive to motion artifacts, and offering relatively high spatial resolution. Approach. To improve the classification performance of fNIRS-based SI BCIs, a novel paradigm was developed in this work that simplifies the articulation movements in SI, making the differences in articulation movement between different word imagery tasks clearer. An SI BCI was proposed to answer questions directly by covertly rehearsing the word ‘是’ or ‘否’ (‘yes’ or ‘no’ in English); the BCI also included an unconstrained rest task. The articulation movements of SI were simplified by retaining only the jaw and lip movements of the vowels in the Chinese Pinyin of the words ‘是’ and ‘否’. Main results. Compared with conventional speech imagery, simplifying the articulation movements in SI generated more distinct brain activities across tasks, which led to more differentiable temporal features and significantly higher classification performance. The average 3-class classification accuracies of the proposed paradigm across all 20 participants reached 69.6% and 60.2% in the 0–10 s and 0–2.5 s time windows, respectively, values significantly higher, by about 10.8% and 5.6%, than those of the conventional SI paradigm. Significance. These results suggest that simplifying the articulation movements in SI is promising for improving the classification performance of intuitive BCIs based on speech imagery.
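The abstract reports that more differentiable temporal features in the fNIRS epochs drove the 3-class accuracy gains, but it does not specify the feature set or classifier. The sketch below is purely illustrative, assuming simple per-channel mean and slope features over the analysis window and a minimal nearest-centroid classifier; the channel count, sampling rate, and classifier choice are all assumptions, not details from the paper.

```python
import numpy as np

def temporal_features(epoch, fs=10.0):
    """Extract per-channel temporal features (mean and least-squares slope).

    epoch: array of shape (n_channels, n_samples), one fNIRS trial
    fs: sampling rate in Hz (assumed; not stated in the abstract)
    """
    t = np.arange(epoch.shape[1]) / fs
    mean = epoch.mean(axis=1)
    # np.polyfit fits one line per column of epoch.T; row 0 holds the slopes
    slope = np.polyfit(t, epoch.T, 1)[0]
    return np.concatenate([mean, slope])

class NearestCentroid:
    """Minimal nearest-centroid classifier (illustrative stand-in only)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_]
        )
        return self

    def predict(self, X):
        # Euclidean distance from each feature vector to each class centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

A usage pattern matching the paradigm would assign one label each to ‘是’ imagery, ‘否’ imagery, and rest, extract features from every trial epoch, and fit the classifier on the resulting feature matrix.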
Database: OpenAIRE