Showing 1 - 8
of 8
for search: '"Alfarabi Imashev"'
Published in:
PLoS ONE, Vol 15, Iss 6, p e0233731 (2020)
Facial expressions in sign languages are used to express grammatical functions, such as question marking, but can also be used to express emotions (either the signer's own or in constructed action contexts). Emotions and grammatical functions can uti…
External link:
https://doaj.org/article/1b3736a0b1ca4fda8552f957af522d1c
Published in:
Proceedings of the 10th International Conference on Human-Agent Interaction.
Authors:
Medet Mukushev, Aidyn Ubingazhibov, Aigerim Kydyrbekova, Alfarabi Imashev, Vadim Kimmelman, Anara Sandygulova
Published in:
PLOS ONE, e0273649
This paper presents a new large-scale signer-independent dataset for Kazakh-Russian Sign Language (KRSL) for the purposes of Sign Language Processing. We envision it to serve as a new benchmark dataset for performance evaluations of Continuous Sign L…
Published in:
CoNLL
The paper presents the first dataset that aims to serve interdisciplinary purposes for the utility of the computer vision community and sign language linguistics. To date, a majority of Sign Language Recognition (SLR) approaches focus on recognising sign…
Author:
Alfarabi Imashev
Published in:
2017 IEEE 11th International Conference on Application of Information and Communication Technologies (AICT).
This article illustrates an approach to detecting and recognizing Kazakh Sign Language static gestures without using expensive depth cameras or sensor gloves. For this research, a simple RGB mono camera was used. The main distinguis…
Authors:
Alfarabi Imashev, Shynggys Islam, Kairat Aitpayev, Nazgul Tazhigaliyeva, Nazerke Kalidolda, Anara Sandygulova, German Ignacio Parisi
Published in:
ICRA
Deaf-mute communities around the world need an effective human-robot interaction system that would act as an interpreter in public places such as banks, hospitals, or police stations. The focus of this work is to address the challenges p…
Published in:
2016 IEEE 10th International Conference on Application of Information and Communication Technologies (AICT).
The goal of this work is to automatically annotate manual and some non-manual features of sign language in video. To achieve this, we examine two techniques: one using a Microsoft Kinect 2.0 depth camera and the other using a simple RGB mono camera. In this w…