Author: |
Gummadi. Varshith, K.V. Pavan Srikar, Tathagat Banerjee, S. Ashvith Reddy, Rithika Reddy Koripally, Krishna Sai Biradar |
Year of publication: |
2021 |
Subject: |
|
Source: |
2021 International Conference on Innovative Practices in Technology and Management (ICIPTM). |
Description: |
Speech impairment and the conversion of sign language into re-engineered human audio signals have long been of interest to computer science. However, architectural robustness and feature extraction over very small regions of change have posed decade-long obstacles to realizing this idea. The paper proposes a convolutional neural network based on a deep belief model, trained on hand-sign imagery collected with Leap Motion controllers. The presented database comprises 10 different hand gestures performed by 10 different subjects (5 men and 5 women), captured as a set of near-infrared images acquired by the Leap Motion sensor. The paper aims for high accuracy on the corresponding training set in order to build a robust model, taking a first step toward image understanding of human signs and aiding specially-abled people. We implemented and tested the algorithm on 2000 images per class. The model achieves an accuracy of 99.4% and a precision of 99.68%. The study intends to improve the understanding of infrared imagery for feature detection in small localization areas and to help revive the idea of human audio re-engineering by the same means.
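As a rough illustration of the kind of classifier the abstract describes, the following is a minimal PyTorch sketch of a small CNN for 10-class gesture recognition on single-channel near-infrared frames. The input resolution (64x64), layer widths, and the GestureCNN class are assumptions made for illustration; they are not the authors' architecture, which the abstract describes as a CNN combined with a deep belief model.

# Hypothetical sketch: a small CNN for 10-class hand-gesture
# classification on single-channel near-infrared images, assuming
# 64x64 inputs. Layer sizes are illustrative, not from the paper.
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # NIR frames are single-channel
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                 # one logit per gesture class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Quick shape check with a dummy batch of four 64x64 NIR frames.
model = GestureCNN()
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 10])

Training such a network on the described dataset (10 classes x 2000 images) would use a standard cross-entropy loss over the class logits; reported metrics such as accuracy and precision would then be computed on a held-out split.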
Database: |
OpenAIRE |
External link: |
|