Author:
Zongxing Lu, Shaoxiong Cai, Bingxing Chen, Zhoujie Liu, Lin Guo, Ligang Yao
Language:
English
Publication Year:
2022
Subject:
Source:
IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol 30, Pp 2623-2629 (2022)
Document Type:
article
ISSN:
1558-0210
DOI:
10.1109/TNSRE.2022.3205026
Description:
A-mode ultrasound offers high resolution, low computational cost, and low hardware cost for predicting dexterous gestures. To accelerate the adoption of A-mode ultrasound gesture recognition, we designed a human-machine interface that interacts with the user in real time. Data processing comprises Gaussian filtering, feature extraction, and PCA dimensionality reduction; the NB, LDA, and SVM algorithms were selected to train the machine learning models, and the whole pipeline was implemented in C++ to classify gestures in real time. This paper reports offline and real-time experiments with HMI-A (a human-machine interface based on A-mode ultrasound) involving ten subjects and ten common gestures. To demonstrate the effectiveness of HMI-A and rule out accidental interference, the offline experiment collected ten rounds of gestures from each subject for ten-fold cross-validation. The results show an offline recognition accuracy of 96.92% ± 1.92%. The real-time experiment was evaluated with four online performance metrics: action selection time, action completion time, action completion rate, and real-time recognition accuracy. The results show an action completion rate of 96.0% ± 3.6% and a real-time recognition accuracy of 83.8% ± 6.9%. This study verifies the great potential of wearable A-mode ultrasound technology and opens a wider range of application scenarios for gesture recognition.
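The abstract names a conventional classification pipeline: Gaussian filtering, feature extraction, PCA dimensionality reduction, then NB/LDA/SVM models evaluated with ten-fold cross-validation. The authors' implementation is in C++ and is not reproduced here; the sketch below is a minimal Python analogue built on scipy/scikit-learn. The feature set (per-segment mean amplitude), filter sigma, PCA variance threshold, SVM kernel, and the random placeholder data are all illustrative assumptions, not the paper's actual choices.

```python
# Minimal sketch of the pipeline described in the abstract:
# Gaussian filtering -> feature extraction -> PCA -> NB/LDA/SVM,
# scored with ten-fold cross-validation. All concrete parameters
# here are assumptions; the paper's real system is written in C++.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(frames, sigma=2.0, n_segments=10):
    """Smooth each A-mode echo frame with a 1-D Gaussian filter, then
    take per-segment mean amplitudes as a simple feature vector.
    (The paper does not specify its feature set; this is a stand-in.)"""
    feats = []
    for frame in frames:                        # frame: 1-D echo signal
        smooth = gaussian_filter1d(frame, sigma=sigma)
        segments = np.array_split(smooth, n_segments)
        feats.append([seg.mean() for seg in segments])
    return np.asarray(feats)

# Placeholder data mirroring the offline protocol: 10 gestures x 10
# rounds per subject, with 1000-sample echo frames of random noise.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((100, 1000))
y = np.repeat(np.arange(10), 10)

X = extract_features(X_raw)
classifiers = {
    "NB": GaussianNB(),
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf", C=1.0),
}
for name, clf in classifiers.items():
    # Standardize, keep components explaining 95% of variance, classify.
    pipe = make_pipeline(StandardScaler(), PCA(n_components=0.95), clf)
    scores = cross_val_score(pipe, X, y, cv=10)  # ten-fold CV, as in the paper
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

With real echo frames substituted for the random placeholders, this reproduces the offline ten-fold evaluation the abstract describes; the real-time path would wrap the same fitted pipeline around a streaming acquisition loop, which the online metrics (selection time, completion time, completion rate, accuracy) then measure.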
Database:
Directory of Open Access Journals
External Link: