Sign Language Recognition Using Graph and General Deep Neural Network Based on Large Scale Dataset

Author: Abu Saleh Musa Miah, Md. Al Mehedi Hasan, Satoshi Nishimura, Jungpil Shin
Language: English
Year of publication: 2024
Subject:
Source: IEEE Access, Vol 12, pp. 34553-34569 (2024)
Document type: article
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3372425
Description: Sign Language Recognition (SLR) is a technology that aims to establish communication between hearing-impaired and hearing communities without relying on human interpreters. Many researchers have worked on automatic sign recognition using hand skeleton joint information instead of image pixels in order to overcome partial occlusion and redundant-background problems. However, for large-scale sign word datasets, body motion and facial expression play an essential role alongside hand information, increasing the inner-gesture variance with which sign language is expressed. Recently, some researchers have developed multi-gesture-based SLR systems, but their accuracy and efficiency remain unsatisfactory for real-time deployment. Addressing these limitations, we propose a novel approach: a two-stream multistage graph convolution with attention and residual connection (GCAR) designed to extract spatial-temporal contextual information. The multistage GCAR system incorporates a channel attention module that dynamically enhances attention, particularly for non-connected skeleton points during specific events within the spatial-temporal features. The methodology captures joint skeleton points and joint motion, providing a comprehensive view of a person's entire body movement during sign language gestures, and feeds this information into two streams. In the first stream, joint key features are processed through sep-TCN, graph convolution, deep learning layers, and a channel attention module across multiple stages, producing rich spatial-temporal features of sign language gestures. Simultaneously, the joint motion is processed in the second stream, mirroring the steps of the first. The fusion of the two resulting features yields the final feature vector, which is fed into the classification module. The model excels at capturing discriminative structural displacements and short-range dependencies by leveraging unified joint features projected onto a high-dimensional space. Owing to the effectiveness of these features, the proposed method achieved accuracies of 90.31%, 94.10%, 99.75%, and 34.41% on the WLASL, PSL, MSL, and ASLLVD large-scale datasets, respectively, with 0.69 million parameters. The high accuracy, coupled with stable computational complexity, demonstrates the superiority of the proposed model. This approach is anticipated to redefine the landscape of sign language recognition, setting a new standard in the field.
Database: Directory of Open Access Journals
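
The abstract describes the pipeline only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the two-stream GCAR idea it outlines: a joint stream and a motion stream, each built from stages of spatial graph convolution, separable temporal convolution (sep-TCN), channel attention, and a residual connection, fused before classification. All layer sizes, the joint count, the placeholder adjacency matrix, and module names are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (an assumed variant)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (N, C, T, V)
        w = self.fc(x.mean(dim=(2, 3)))                # global average pool over time and joints
        return x * w[:, :, None, None]


class GCARBlock(nn.Module):
    """One stage: spatial graph convolution + sep-TCN + channel attention + residual."""
    def __init__(self, in_c, out_c, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)           # (V, V) normalized skeleton adjacency
        self.gcn = nn.Conv2d(in_c, out_c, kernel_size=1)
        self.sep_tcn = nn.Sequential(                  # depthwise-separable temporal convolution
            nn.Conv2d(out_c, out_c, (9, 1), padding=(4, 0), groups=out_c),
            nn.Conv2d(out_c, out_c, 1),
            nn.BatchNorm2d(out_c), nn.ReLU(),
        )
        self.att = ChannelAttention(out_c)
        self.res = nn.Conv2d(in_c, out_c, 1) if in_c != out_c else nn.Identity()

    def forward(self, x):                              # x: (N, C, T, V)
        agg = torch.einsum("nctv,vw->nctw", x, self.A)  # aggregate features from neighbor joints
        out = self.sep_tcn(self.gcn(agg))
        return self.att(out) + self.res(x)


class TwoStreamGCAR(nn.Module):
    """Joint stream + motion stream, fused before the classifier (hypothetical sizes)."""
    def __init__(self, num_joints=27, num_classes=100, channels=(3, 32, 64)):
        super().__init__()
        A = torch.eye(num_joints)                      # placeholder adjacency (assumption)
        def stream():
            return nn.Sequential(*[
                GCARBlock(channels[i], channels[i + 1], A)
                for i in range(len(channels) - 1)
            ])
        self.joint_stream, self.motion_stream = stream(), stream()
        self.classifier = nn.Linear(2 * channels[-1], num_classes)

    def forward(self, joints):                         # joints: (N, 3, T, V)
        motion = joints[:, :, 1:] - joints[:, :, :-1]  # frame-to-frame joint motion
        f1 = self.joint_stream(joints).mean(dim=(2, 3))
        f2 = self.motion_stream(motion).mean(dim=(2, 3))
        return self.classifier(torch.cat([f1, f2], dim=1))


if __name__ == "__main__":
    model = TwoStreamGCAR()
    logits = model(torch.randn(2, 3, 32, 27))          # batch of 2, 32 frames, 27 joints
    print(logits.shape)                                # torch.Size([2, 100])

The motion stream here is simply the frame-to-frame difference of the joint coordinates, which is one common way to realize a "joint motion" input; whether the paper derives motion exactly this way is an assumption of this sketch.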