Showing 1 - 7 of 7 for search: '"Alexandr Axyonov"'
Author:
Alexey Kashevnik, Igor Lashkov, Alexandr Axyonov, Denis Ivanko, Dmitry Ryumin, Artem Kolchin, Alexey Karpov
Published in:
IEEE Access, Vol 9, Pp 34986-35003 (2021)
This paper introduces a new methodology for creating an in-the-wild multimodal corpus for audio-visual speech recognition in driver monitoring systems in a way that is comfortable for the driver. The presented methodology is universal and can be used for corpus recording …
External link:
https://doaj.org/article/60344f4f44a74ff993eb7328cb0de551
Author:
Denis Ivanko, Alexey Kashevnik, Dmitry Ryumin, Andrey Kitenko, Alexandr Axyonov, Igor Lashkov, Alexey Karpov
Published in:
International Conference on Multimodal Interaction (ICMI)
Published in:
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XLII-2-W12, Pp 179-183 (2019)
In this paper, we propose an approach to detect and recognize 3D one-handed gestures for human-machine interaction. The logical structure of the modules of the system for recording a gestural database is described. The logical structure of the database …
Author:
Iosif Mporas, Ildar Kagirov, Alexey Karpov, Anton Saveliev, Dmitry Ryumin, Alexandr Axyonov, Irina S. Kipyatkova, Milos Zelezny, Nikita Pavlyuk
Published in:
Electronics, Vol. 9, Issue 12, Article 2093 (2020)
This paper presents the research and development of the prototype of the assistive mobile information robot (AMIR). The main features of the presented prototype are voice- and gesture-based interfaces with Russian speech and sign language recognition …
Published in:
Proceedings of the 14th International Conference on Electromechanics and Robotics “Zavalishin's Readings”, ISBN: 9789811392665
Automatic lip-reading (ALR) is a challenging task, and a significant amount of research has been devoted to it in recent years. However, continuous Russian speech recognition remains an under-investigated area. In this paper, we present …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::ac93de8e80d3477231e14ba75b3228ca
https://doi.org/10.1007/978-981-13-9267-2_39
Published in:
PerCom Workshops
The paper presents a concept of a smart robotic trolley for supermarkets with a multimodal user interface, including sign language and acoustic speech recognition, and equipped with a touchscreen. Considerable progress in hand gesture recognition and …
Published in:
Speech and Computer (SPECOM), ISBN: 9783030260606
The paper presents an approach to the multimodal recognition of dynamic and static gestures of Russian sign language through 3D convolutional and LSTM neural networks. A set of data in color format and a depth map, consisting of 48 one-handed gestures … (a minimal illustrative sketch of such a model follows the links below).
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::ae4ad1207b97848b8ec662e1e350b6b5
https://doi.org/10.1007/978-3-030-26061-3_20
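The last record above describes recognizing dynamic and static sign-language gestures with 3D convolutional and LSTM neural networks. The snippet below is a minimal illustrative sketch of that kind of pipeline in PyTorch, not the authors' implementation: the layer sizes, the 16-frame 64x64 RGB input, and the 48-class output are assumptions loosely based on the abstract, and the depth-map stream is omitted.

# Minimal sketch (not the authors' code): a small 3D-CNN feature extractor
# followed by an LSTM over time, classifying short RGB gesture clips.
# All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class Gesture3DCNNLSTM(nn.Module):
    def __init__(self, num_classes: int = 48, lstm_hidden: int = 128):
        super().__init__()
        # 3D convolutions over (time, height, width) of an RGB clip.
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool only spatially, keep all frames
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),    # keep the time axis, collapse space
        )
        # The LSTM aggregates per-frame CNN features over time.
        self.lstm = nn.LSTM(input_size=32, hidden_size=lstm_hidden, batch_first=True)
        self.fc = nn.Linear(lstm_hidden, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels=3, frames, height, width)
        feats = self.cnn3d(clip)                   # (batch, 32, frames, 1, 1)
        feats = feats.squeeze(-1).squeeze(-1)      # (batch, 32, frames)
        feats = feats.transpose(1, 2)              # (batch, frames, 32)
        _, (h_n, _) = self.lstm(feats)             # final hidden state summarizes the clip
        return self.fc(h_n[-1])                    # (batch, num_classes)


if __name__ == "__main__":
    model = Gesture3DCNNLSTM()
    dummy_clip = torch.randn(2, 3, 16, 64, 64)     # 2 clips, 16 frames of 64x64 RGB
    print(model(dummy_clip).shape)                 # torch.Size([2, 48])

A real system in this spirit would add a second input stream for the depth maps mentioned in the abstract and fuse the two modalities before or after the recurrent layer; the single-stream version above only shows the basic 3D-CNN + LSTM structure.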