Word Level Sign Language Translation using Deep Learning.

Authors: Pathrikar, Vighnesh; Podutwar, Tejas; Siddannavar, Akshay; Mandana, Akash; Rajeswari, K.; Vispute, S. R.; Vivekanandan, N.
Source: Journal of Engineering Science & Technology Review; 2023, Vol. 16 Issue 4, p180-187, 8p
Abstract: Sign language is an interactive language through which deaf-mute people can communicate with hearing people. There are two approaches to translating sign language: contact-based recognition and vision-based recognition. The contact-based method depends on external electronic devices, such as sensors, to identify a person's movements and translate them into text. In the vision-based technique, the real-time motion of a person is captured via a web camera and subsequently converted to text using image processing and deep learning algorithms. In this paper, we compare and contrast various techniques for sign language recognition and translation. From our review, we conclude that models trained on custom datasets were more accurate than those trained on datasets accumulated by other researchers, such as WLASL. Most of the literature used a CNN, a form of RNN such as GRU or LSTM, or Transformers. From our study, we found that LSTM outperformed all other models, with an average accuracy of 85.4% when data augmentation is applied. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
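The abstract reports that LSTM models were the strongest performers in the surveyed literature. As an illustration only, the following is a minimal sketch of a single LSTM cell step in pure Python; the scalar weights, the one-unit cell size, and the toy per-frame feature sequence are all hypothetical and are not taken from any reviewed model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM time step for a 1-unit cell with scalar weights."""
    # Gates: input (i), forget (f), output (o), candidate value (g).
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])
    c = f * c_prev + i * g      # new cell state
    h = o * math.tanh(c)        # new hidden state
    return h, c

# Hypothetical weights; a real model learns these from video data.
w = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}

# Toy sequence standing in for per-frame features (e.g., hand keypoints).
h, c = 0.0, 0.0
for x in [0.1, 0.4, 0.9]:
    h, c = lstm_step(x, h, c, w)
print(round(h, 4))
```

In a vision-based pipeline like the one the abstract describes, each `x` would be a feature vector extracted from a webcam frame, and the final hidden state would feed a classifier over the sign vocabulary.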