Popis: |
Sign language is the most prevalent form of communication among people with speech and hearing disabilities. The most widely used sign languages involve static or dynamic hand gestures. Among them, Bengali Sign Language (BdSL) is one of the most difficult to learn and comprehend because of its large alphabet and vocabulary and the variation in its expression techniques. Existing solutions include learning BdSL or hiring an interpreter; however, BdSL interpreter support is scarce and expensive (when not voluntary). People with disabilities might find it more comfortable to converse with the general population through machine translation of sign language. Deep learning, a subset of machine learning inspired by the human brain, appears to be a viable solution, and computer vision in particular may hold the key for the hearing-impaired and non-verbal community. Therefore, we propose KUNet ("Khulna University Network"), a novel CNN-based classification framework optimized by a genetic algorithm (GA), to classify BdSL. This model and the accompanying dataset contribute toward a BdSL machine translator. The GA-optimized KUNet achieved an accuracy of 99.11% on KU-BdSL. After training the model on KU-BdSL, we compared it with state-of-the-art studies and interpreted its black-box nature using explainable AI (XAI). Additionally, our model outperformed several well-known models trained on the KU-BdSL dataset. This study will benefit the hearing-impaired and non-verbal community by allowing them to communicate effortlessly and by minimizing their hardship.