Cooperating with Avatars Through Gesture, Language and Action

Authors: Gururaj Mulay, Rahul Bangar, J. Ross Beveridge, Nikhil Krishnaswamy, Dhruva Patil, Jaime Ruiz, Bruce A. Draper, James Pustejovsky, Pradyumna Narayana, Isaac Wang, Kyeongmin Rim
Year of publication: 2018
Source: Advances in Intelligent Systems and Computing, ISBN: 9783030010539
IntelliSys (1)
DOI: 10.1007/978-3-030-01054-6_20
Description: Advances in artificial intelligence are fundamentally changing how we relate to machines. We used to treat computers as tools, but now we expect them to be agents, and increasingly our instinct is to treat them like peers. This paper is an exploration of peer-to-peer communication between people and machines. Two ideas are central to the approach explored here: shared perception, in which people work together in a shared environment and much of the information that passes between them is contextual and derived from perception; and visually grounded reasoning, in which actions are considered feasible if they can be visualized and/or simulated in 3D. We explore shared perception and visually grounded reasoning in the context of blocks world, which serves as a surrogate for cooperative tasks in which the partners share a workspace. We begin with elicitation studies observing pairs of people working together in blocks world and noting the gestures they use. These gestures are grouped into three categories: social, deictic, and iconic. We then build a prototype system in which people are paired with avatars in a simulated blocks world. We find that when participants can see but not hear each other, all three gesture types are necessary, but that when the participants can speak to each other, the social and deictic gestures remain important while the iconic gestures become less so. We also find that ambiguities flip the conversational lead: the partner previously receiving information takes the lead in order to resolve the ambiguity.
Database: OpenAIRE