Spatial and Temporal Hand-Raising Recognition from Classroom Videos Using Locality, Relative Position-Aware Non-Local Networks and Hand Tracking

Author: Thu-Hien Le, Hoang-Nhat Tran, Phuong-Dung Nguyen, Hong-Quan Nguyen, Thuy-Binh Nguyen, Thanh-Hai Tran, Hai Vu, Thi-Thao Tran, Thi-Lan Le
Language: English
Year of publication: 2023
Source: Vietnam Journal of Computer Science, Vol 10, Iss 02, Pp 243-271 (2023)
Document type: article
ISSN: 2196-8888 (print); 2196-8896 (electronic)
DOI: 10.1142/S2196888822500397
Description: The hand-raising gesture is one of the most common communication signals in the classroom, and its frequency reflects the classroom atmosphere, the attractiveness of the subject, and the level of interaction between students and teachers. However, automatic hand-raising gesture detection and recognition remains a challenging problem, mainly due to low hand resolution, hand occlusion, varied backgrounds, and viewpoint changes. While the majority of existing methods focus on static hand-raising posture detection, this paper proposes a framework for dynamic hand gesture recognition from classroom videos consisting of two main stages: hand posture detection and dynamic hand gesture recognition. In the hand posture detection stage, we extend previous work by adding relative position awareness to the non-local network, which in turn accelerates performance. After the hand-raising posture has been detected on static images, the dynamic hand gesture recognition stage incorporates object detection and tracking to associate hand-raising detections across consecutive frames, supplementing detections missed due to occlusion and yielding hand-raising gesture recognition at the event level. The experimental results show that the proposed method outperforms three benchmark models for static hand posture detection, namely Faster R-CNN, Libra R-CNN, and Libra R-CNN+RDA, on our dataset, with mAP gains of 7.68%, 5.76%, and 0.35%, respectively. For event-level recognition, the proposed method achieves a frame-wise accuracy of 90.0%, a temporal IoU of 84.4%, an F1-score@0.3 of 83.2%, and a Levenshtein score of 84.3%. The code and dataset used in the paper will be made publicly available to the research community.
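As a reading aid, here is a minimal PyTorch sketch of the relative position-aware non-local idea mentioned in the description: a learned 2D relative-position bias added to the embedded-Gaussian affinity of a standard non-local block before the softmax. The class name, the bias parameterization, and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RelPosNonLocalBlock(nn.Module):
    """Hypothetical sketch: an embedded-Gaussian non-local block with a
    learned additive 2D relative-position bias (not the paper's exact
    formulation). Inputs must satisfy H <= max_h and W <= max_w."""

    def __init__(self, channels: int, max_h: int, max_w: int):
        super().__init__()
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)
        self.g = nn.Conv2d(channels, inter, kernel_size=1)
        self.out = nn.Conv2d(inter, channels, kernel_size=1)
        self.max_h, self.max_w = max_h, max_w
        # One learnable scalar per relative offset: (2H-1) x (2W-1).
        self.rel_bias = nn.Parameter(torch.zeros(2 * max_h - 1, 2 * max_w - 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (N, HW, C')
        k = self.phi(x).flatten(2)                    # (N, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)      # (N, HW, C')
        affinity = torch.bmm(q, k)                    # (N, HW, HW)

        # Look up the relative-position bias for every pixel pair.
        ys, xs = torch.meshgrid(torch.arange(h, device=x.device),
                                torch.arange(w, device=x.device),
                                indexing="ij")
        pos = torch.stack([ys.flatten(), xs.flatten()], dim=1)  # (HW, 2)
        rel = pos[:, None, :] - pos[None, :, :]                 # (HW, HW, 2)
        bias = self.rel_bias[rel[..., 0] + self.max_h - 1,
                             rel[..., 1] + self.max_w - 1]      # (HW, HW)
        attn = torch.softmax(affinity + bias, dim=-1)

        y = torch.bmm(attn, v).transpose(1, 2).reshape(n, -1, h, w)
        return x + self.out(y)  # residual connection
```

For example, RelPosNonLocalBlock(256, 64, 64)(torch.randn(1, 256, 32, 32)) returns a tensor of the same shape; the residual connection lets such a block be dropped into an existing detector backbone like the Faster R-CNN variants benchmarked above.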
Database: Directory of Open Access Journals
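The event-level stage described in the abstract associates per-frame detections over time. Below is a hedged sketch of one simple way to do this: greedy IoU-based linking of hand-raising boxes into tracks, with linear interpolation across short gaps to recover detections missed under occlusion. The function names, the IoU threshold, and the maximum gap length are illustrative assumptions; the paper's actual tracker may differ.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def link_detections(per_frame_boxes, iou_thr=0.3, max_gap=5):
    """Greedily link per-frame hand-raising boxes into tracks.

    per_frame_boxes: list over frames, each a list of (x1, y1, x2, y2).
    A box is attached to the track whose last box overlaps it most
    (IoU > iou_thr) within the last `max_gap` frames; skipped frames
    are filled by linearly interpolating the box corners.
    """
    tracks = []  # each track is a list of (frame_idx, box) pairs
    for t, boxes in enumerate(per_frame_boxes):
        for box in boxes:
            box = np.asarray(box, dtype=float)
            best, best_iou = None, iou_thr
            for tr in tracks:
                last_t, last_box = tr[-1]
                if 0 < t - last_t <= max_gap and iou(last_box, box) > best_iou:
                    best, best_iou = tr, iou(last_box, box)
            if best is None:
                tracks.append([(t, box)])
            else:
                last_t, last_box = best[-1]
                for g in range(last_t + 1, t):  # fill the occlusion gap
                    a = (g - last_t) / (t - last_t)
                    best.append((g, (1 - a) * last_box + a * box))
                best.append((t, box))
    return tracks
```

Tracks shorter than a minimum duration can then be discarded, and each surviving track's frame span reported as one hand-raising event, which is the kind of output the frame-wise and event-level metrics quoted above evaluate.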