Author:
Jungpil Shin, Abu Saleh Musa Miah, Sota Konnai, Itsuki Takahashi, Koki Hirooka
Language:
English
Year of publication:
2024
Subject:

Source:
Scientific Reports, Vol 14, Iss 1, Pp 1-13 (2024)
Document type:
article
ISSN:
2045-2322
DOI:
10.1038/s41598-024-72996-7
Description:
Abstract: Hand gesture recognition based on sparse multichannel surface electromyography (sEMG) still poses a significant challenge to deployment as a muscle–computer interface. Many researchers have worked to develop sEMG-based hand gesture recognition systems; however, existing systems still fall short of satisfactory performance because of ineffective feature enhancement, making their predictions erratic and unstable. To tackle these challenges comprehensively, we introduce a novel approach: a lightweight sEMG-based hand gesture recognition system built on a four-stream deep learning architecture. Each stream strategically combines temporal convolutional network (TCN)-based time-varying features with convolutional neural network (CNN)-based frame-wise features. The first stream harnesses the TCN module to extract nuanced time-varying temporal features. The second stream integrates a hybrid long short-term memory (LSTM)-TCN module, extracting temporal features with the LSTM and enhancing them with the TCN to effectively capture intricate long-range temporal relations. The third stream adopts a spatio-temporal strategy, merging the CNN and TCN modules; this integration enables concurrent comprehension of spatial and temporal features, enriching the model's understanding of the underlying dynamics of the data. The fourth stream uses a skip connection mechanism to mitigate potential information loss, ensuring robust information flow throughout the network; the four stream outputs are then concatenated into a comprehensive and effective final feature representation. A channel attention-based feature selection module selects the most effective features, reducing computational complexity, before feeding them into the classification module. The proposed model achieves average accuracies of 94.31% and 98.96% on the Ninapro DB1 and DB9 datasets, respectively. This high accuracy demonstrates the superiority of the proposed model, with implications for enhancing the quality of life of individuals using prosthetic limbs and for advancing control systems in the field of robotic human–machine interfaces.
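The TCN streams described in the abstract rest on dilated causal convolutions. As a rough illustration only (this is not the authors' code, and the kernel, dilation, and signal are hypothetical), a minimal pure-Python sketch of a single dilated causal convolution:

```python
def dilated_causal_conv(x, kernel, dilation):
    """Single dilated causal 1D convolution, the core TCN operation:
    the output at time t depends only on inputs at t, t-d, t-2d, ...,
    so no future samples leak into the prediction."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            idx = t - i * dilation  # look back i*dilation steps
            if idx >= 0:            # implicit zero-padding of the past
                acc += w * x[idx]
        out.append(acc)
    return out

# Hypothetical sEMG-like sample sequence, summing kernel, dilation 2:
signal = [1.0, 2.0, 3.0, 4.0, 5.0]
print(dilated_causal_conv(signal, kernel=[1.0, 1.0], dilation=2))
# -> [1.0, 2.0, 4.0, 6.0, 8.0]
```

Stacking such layers with growing dilation is what lets a TCN cover long-range temporal relations with few parameters, which is consistent with the paper's emphasis on a lightweight architecture.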
Database:
Directory of Open Access Journals
External link:
