Abstract: |
Text, as one of humanity's most influential innovations, has played an important role in shaping our lives. Reading text can be difficult for several reasons, such as luminosity, text orientation, writing style, and very light colors. The visually impaired, however, have difficulty reading text in all of these situations. In particular, handwritten text is harder to read than digital text because of the varying forms and styles of handwriting across different writers, or sometimes even within the same writer's handwriting. The visually impaired would therefore benefit from a device or system that helps them overcome this problem and improves their quality of life. Arabic text recognition and identification is a particularly difficult task because of diacritics such as consonant marks, tashkil, and others. In this context, we propose a recognition and identification system for Arabic Handwritten Texts with Diacritics (AHTD) based on deep learning, using a convolutional neural network. Text images are trained, tested, and validated on our Arabic Handwritten Texts with Diacritics dataset (AHT2D). The recognized text is then enhanced with augmented reality (AR) technology and produced as a 2D image. Finally, the recognized text is converted into audio output. Both the voice output and the visual output are delivered to the visually impaired user. The experimental results show that the proposed system is robust, with an accuracy rate of 95%.