Neural network-based method for visual recognition of driver’s voice commands using attention mechanism
Author: Alexandr A. Axyonov, Elena V. Ryumina, Dmitry A. Ryumin, Denis V. Ivanko, Alexey A. Karpov
Language: English, Russian
Year of publication: 2023
Source: Naučno-tehničeskij Vestnik Informacionnyh Tehnologij, Mehaniki i Optiki, Vol 23, Iss 4, Pp 767-775 (2023)
Document type: article
ISSN: 2226-1494; 2500-0373
DOI: 10.17586/2226-1494-2023-23-4-767-775
Description: Visual speech recognition, or automated lip-reading, is actively applied to speech-to-text conversion. Video data is useful in multimodal speech recognition systems, particularly when acoustic data is degraded or unavailable. The main purpose of this study is to improve driver command recognition by analyzing visual information, thereby reducing touch interaction with vehicle systems (multimedia, navigation, phone calls, etc.) while driving. We propose a method for automated lip-reading of the driver's speech while driving, based on a deep neural network with the 3DResNet18 architecture. Extending this architecture with a bidirectional LSTM and an attention mechanism yields higher recognition accuracy at the cost of a slight decrease in processing speed. Two variants of the neural network architecture for visual speech recognition are proposed and investigated. The first architecture recognized driver voice commands with an accuracy of 77.68 %, which is 5.78 % lower than the second architecture, whose accuracy was 83.46 %. System performance, measured by the real-time factor (RTF), was 0.076 for the first architecture and 0.183 for the second, more than twice as high. The proposed method was evaluated on the multimodal RUSAVIC corpus recorded in a car. The results can be used in audio-visual speech recognition systems, which are recommended in high-noise conditions such as driving a vehicle. In addition, the analysis performed allows choosing the optimal visual speech recognition model for subsequent integration into a mobile-device-based assistive system. A sketch of the described pipeline follows this record.
Database: Directory of Open Access Journals
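The abstract outlines the model's structure: a 3DResNet18 front-end for spatiotemporal features, a bidirectional LSTM with an attention mechanism, and a classifier over driver commands. Below is a minimal PyTorch sketch of that pipeline under stated assumptions, not the authors' implementation: layer sizes, the additive attention form, the `num_commands` vocabulary, the 88×88 mouth crops, and the simplified one-convolution front-end (standing in for a full 3D ResNet18) are all assumptions not given in this record.

```python
# Hedged sketch: 3D-CNN front-end + BiLSTM + temporal attention + classifier.
# All sizes and the attention formulation are assumptions; the single conv
# stem below stands in for the paper's full 3DResNet18 front-end.
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Additive attention pooling BiLSTM outputs over time (assumed form)."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, time, dim); weights sum to 1 over the time axis
        w = torch.softmax(self.score(h), dim=1)   # (batch, time, 1)
        return (w * h).sum(dim=1)                 # (batch, dim)


class LipReader(nn.Module):
    """3D-CNN front-end + BiLSTM + attention + command classifier (sketch)."""

    def __init__(self, num_commands: int, feat_dim: int = 256):
        super().__init__()
        # Stand-in for the 3DResNet18 front-end: one 3D conv stem, then
        # spatial pooling; a full ResNet18-3D would replace this block.
        self.frontend = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2),
                      padding=(2, 3, 3), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d((None, 1, 1)),   # keep time, pool space
        )
        self.proj = nn.Linear(64, feat_dim)
        self.bilstm = nn.LSTM(feat_dim, feat_dim, batch_first=True,
                              bidirectional=True)
        self.attn = TemporalAttention(2 * feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_commands)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, 3, time, height, width) mouth-region frames
        x = self.frontend(clips)                      # (B, 64, T, 1, 1)
        x = x.flatten(3).squeeze(-1).transpose(1, 2)  # (B, T, 64)
        h, _ = self.bilstm(self.proj(x))              # (B, T, 2*feat_dim)
        return self.classifier(self.attn(h))          # (B, num_commands)


if __name__ == "__main__":
    model = LipReader(num_commands=50)  # placeholder vocabulary size
    logits = model(torch.randn(2, 3, 32, 88, 88))  # two 32-frame clips
    print(logits.shape)  # torch.Size([2, 50])
```

The RTF values quoted in the abstract (0.076 and 0.183) follow the usual definition of the real-time factor: processing time divided by the duration of the processed input, so values below 1.0 mean faster than real time. A minimal measurement, assuming 25 fps video:

```python
import time

def real_time_factor(model: nn.Module, clips: torch.Tensor,
                     fps: float = 25.0) -> float:
    """Processing time divided by clip duration (25 fps is an assumption)."""
    start = time.perf_counter()
    with torch.no_grad():
        model(clips)
    elapsed = time.perf_counter() - start
    return elapsed / (clips.shape[2] / fps)  # frames / fps = duration in s
```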