Author:
Baktash, Abdullah Qassim; Mohammed, Saleem Latteef; Daeef, Ammar Yahya
Source:
AIP Conference Proceedings; 10/25/2022, Vol. 2398 Issue 1, p1-8, 8p
Abstract:
Various physiological and accidental causes can leave a person unable to speak, making it necessary to develop an efficient, user-friendly technique for translating visual sign language into speech. In this paper, a vision-based translator is proposed as a hand gesture classification model. Region-of-interest (ROI) extraction and hand segmentation are performed using a Mask Region-based Convolutional Neural Network (Mask R-CNN). The classification model is trained on a large gesture dataset using a Convolutional Neural Network (CNN) and hosted on a web server. The system achieves an accuracy of 99.79% and a loss of 0.0096. The trained model is loaded from the server into an internet browser using a dedicated JavaScript library. The hand gesture is captured with a smart-device camera and fed to the model to provide a real-time prediction. [ABSTRACT FROM AUTHOR]
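The abstract does not name the JavaScript library, publish the model, or list the gesture classes, so the following is only a minimal sketch of the in-browser inference step it describes, assuming TensorFlow.js as the browser runtime; MODEL_URL, INPUT_SIZE, and GESTURE_LABELS are hypothetical placeholders, not values from the paper.

```typescript
import * as tf from '@tensorflow/tfjs';

// Hypothetical values; the paper does not publish the model URL,
// the network input size, or the gesture label set.
const MODEL_URL = 'https://example.com/gesture-model/model.json';
const INPUT_SIZE = 224;
const GESTURE_LABELS: string[] = ['A', 'B', 'C'];

async function main(): Promise<void> {
  // Load the trained classifier exported from the server as a tfjs model.
  const model = await tf.loadLayersModel(MODEL_URL);

  // Open the smart-device camera and stream frames into a <video> element.
  // (Browsers typically require this to run after a user gesture.)
  const video = document.createElement('video');
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  async function classifyFrame(): Promise<void> {
    const logits = tf.tidy(() => {
      // Capture the current frame, resize it to the network input,
      // scale pixel values to [0, 1], and add a batch dimension.
      const frame = tf.browser.fromPixels(video)
        .resizeBilinear([INPUT_SIZE, INPUT_SIZE])
        .toFloat()
        .div(255)
        .expandDims(0);
      return model.predict(frame) as tf.Tensor;
    });
    const probs = await logits.data();
    logits.dispose();

    // Report the highest-scoring gesture class for this frame.
    const best = probs.indexOf(Math.max(...Array.from(probs)));
    console.log(`predicted gesture: ${GESTURE_LABELS[best] ?? best}`);

    // Schedule the next frame for real-time prediction.
    requestAnimationFrame(() => void classifyFrame());
  }
  void classifyFrame();
}

void main();
```

Loading the model once and reusing it per frame, with tf.tidy disposing intermediate tensors, is the usual pattern for keeping in-browser inference real-time without leaking GPU memory.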
Database:
Complementary Index |