Showing 1 - 10 of 30 for search: '"Pieter Wolfert"'
Published in:
Applied Sciences, Vol 14, Iss 4, p 1460 (2024)
This paper compares three methods for evaluating computer-generated motion behaviour for animated characters: two commonly used direct rating methods and a newly designed questionnaire. The questionnaire is specifically designed to measure the human-
External link:
https://doaj.org/article/ed3d061102ef434cb2f9492db9f5f8ca
Author:
Saya Amioka, Ruben Janssens, Pieter Wolfert, Qiaoqiao Ren, Maria Jose Pinto Bernal, Tony Belpaeme
Published in:
Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction.
Author:
Pieter Wolfert, Taras Kucherenko, Carla Viegas, Zerrin Yumak, Youngwoo Yoon, Gustav Eje Henter
Published in:
International Conference on Multimodal Interaction.
Author:
Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov, Mihail Tsakov, Gustav Eje Henter
Published in:
International Conference on Multimodal Interaction.
This paper reports on the second GENEA Challenge to benchmark data-driven automatic co-speech gesture generation. Participating teams used the same speech and motion dataset to build gesture-generation systems. Motion generated by all these systems w
Published in:
Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI '22)
Visually situated language interaction is an important challenge in multi-modal Human-Robot Interaction (HRI). In this context we present a data-driven method to generate situated conversation starters based on visual context. We take visual data abo
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::513d328801135e8835ab800ef688948f
https://hdl.handle.net/1854/LU-8747513
Author:
Zerrin Yumak, Taras Kucherenko, Pieter Wolfert, Gustav Eje Henter, Patrik Jonell, Youngwoo Yoon
Published in:
ICMI
Embodied agents benefit from using non-verbal behavior when communicating with humans. Despite several decades of non-verbal behavior-generation research, there is currently no well-developed benchmarking culture in the field. For example, most resea
Published in:
HRI (Companion)
HRI '21 Companion: Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 94-98. New York, NY : Association for Computing Machinery (ACM)
Eye behaviour is one of the main modalities used to regulate face-to-face conversation. Gaze aversion and mutual gaze, for example, serve to signal cognitive load, interest or turns during a conversation. While eye blin
Published in:
IUI '21: 26th International Conference on Intelligent User Interfaces
Co-speech gestures, gestures that accompany speech, play an important role in human communication. Automatic co-speech gesture generation is thus a key enabling technology for embodied conversational agents (ECAs), since humans expect ECAs to be capa
Externí odkaz:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::256b9d0e6174d344390aff63af42fa45
https://hdl.handle.net/1854/LU-8699782
Published in:
ICMI '21: Proceedings of the 2021 International Conference on Multimodal Interaction
In many research areas, for example motion and gesture generation, objective measures alone do not provide an accurate impression of key stimulus traits such as perceived quality or appropriateness. The gold standard is instead to evaluate these aspe
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::fd5ecfacebb4afa40c79fdc3f4a78c4a
http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-309462