Popis: |
Deaf communities in India are still struggling for Indian Sign Language to gain the status of a minority language. A system is needed that translates Indian Sign Language into corresponding English text. To do this, both visual and non-visual input of sign language signs has to be processed, translated into English words, and these words then assembled into grammatically correct and meaningful sentences. Researchers have worked on processing input that may be sensor-based or image-based, using videos in their entirety or sampling video frames at fixed intervals to determine motion trajectories. The input can therefore take several forms: a hardware system for recognizing hand movements, images, or video. This paper focuses on state-of-the-art literature that identifies regions of interest in non-visual inputs, image frames, and video frames to determine the features of a particular hand gesture. The survey also takes into account approaches adopted by researchers for other sign languages, such as American Sign Language and Taiwanese Sign Language, which helps to develop a perspective for Indian Sign Language. Finally, the paper reviews previous work on translating a video into English using Natural Language Processing techniques such as the Viterbi algorithm, tokenization, part-of-speech tagging, and parsing.