Abstract: |
Arabic Sign Language is the means of communication for people with hearing impairment in Arabic-speaking countries. According to World Health Organization reports, the number of people with hearing impairment in the Arab region has increased, producing a gap between society and hearing-impaired people in education and learning, work, social media use, communication with others, etc. Automatic sign language interpreters have therefore become a pressing necessity to reduce this gap and minimize their isolation. In this paper, a hybrid combination of modified Convolutional Neural Networks and machine learning classifiers is proposed. The proposal recognizes both sign language alphabets and symbolic sign language (words). A new dataset was created from existing dataset images together with images extracted from video or captured by a camera. Images are preprocessed, and features extracted using Linear Discriminant Analysis are fed to a one-dimensional convolutional neural network that uses one of three machine learning classifiers (Naive Bayes, Decision Tree, and Random Forest) in place of a neural-network classifier. The performance of the proposed algorithm was tested under various challenges; it achieves a promising accuracy of up to 99.9% for recognition of alphabets and words and can operate efficiently in real time. |
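
The following is a minimal sketch of the hybrid pipeline the abstract outlines (LDA feature extraction, a 1D CNN, and a machine learning classifier replacing the network's softmax head), assuming scikit-learn and Keras. The class count, layer sizes, placeholder data, and the choice of Random Forest as the final classifier are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of the abstract's pipeline; shapes and hyperparameters
# are assumptions for illustration only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from tensorflow import keras

n_classes = 32  # assumed number of Arabic sign classes
X = np.random.rand(640, 64 * 64)           # placeholder: flattened, preprocessed images
y = np.random.randint(0, n_classes, 640)   # placeholder labels

# 1) LDA feature extraction: projects each image to at most n_classes - 1 dimensions.
lda = LinearDiscriminantAnalysis(n_components=n_classes - 1)
X_lda = lda.fit_transform(X, y)[..., np.newaxis]  # add a channel axis for Conv1D

# 2) 1D CNN over the LDA feature vector.
cnn = keras.Sequential([
    keras.layers.Input(shape=(n_classes - 1, 1)),
    keras.layers.Conv1D(32, 3, activation="relu"),
    keras.layers.MaxPooling1D(2),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu", name="features"),
    keras.layers.Dense(n_classes, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(X_lda, y, epochs=5, verbose=0)

# 3) Replace the softmax head with a machine learning classifier
#    (Random Forest here; Naive Bayes or a Decision Tree would fit the same way).
feature_extractor = keras.Model(cnn.input, cnn.get_layer("features").output)
deep_features = feature_extractor.predict(X_lda, verbose=0)
clf = RandomForestClassifier().fit(deep_features, y)

# Inference: LDA projection -> CNN features -> Random Forest prediction.
pred = clf.predict(feature_extractor.predict(X_lda, verbose=0))
```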