Grounding natural language instructions to semantic goal representations for abstraction and generalization
Author: | Edward Williams, Lawson L. S. Wong, Siddharth Karamcheti, Stefanie Tellex, Nakul Gopalan, Dilip Arumugam, Mina Rhee |
Year of publication: | 2018 |
Subject: | Structure (mathematical logic), Generalization, Realization (linguistics), Logical form, Selection (linguistics), Representation (mathematics), Abstraction (linguistics), Natural language, Artificial Intelligence, Theoretical computer science, Computer science |
Source: | Autonomous Robots. 43:449-468 |
ISSN: | 1573-7527 0929-5593 |
DOI: | 10.1007/s10514-018-9792-8 |
Description: | Language grounding is broadly defined as the problem of mapping natural language instructions to robot behavior. To be truly effective, these language grounding systems must be accurate in their selection of behavior, efficient in the robot’s realization of that selected behavior, and capable of generalizing beyond the commands and environment configurations seen at training time. One choice that is crucial to the success of a language grounding model is the representation used to capture the objective specified by the input command. Prior work has varied in its use of explicit goal representations: some approaches lack a representation altogether, resulting in models that infer whole sequences of robot actions, while others map to carefully constructed logical form representations. While many of the models in either category are reasonably accurate, they fail to offer either efficient execution or any generalization without requiring a large amount of manual specification. In this work, we take a first step towards language grounding models that excel across accuracy, efficiency, and generalization through the construction of simple, semantic goal representations within Markov decision processes. We propose two related semantic goal representations that take advantage of the hierarchical structure of tasks and the compositional nature of language, respectively, and present multiple grounding models for each. We validate these ideas empirically with results collected from following text instructions within a simulated mobile-manipulator domain, as well as demonstrations of a physical robot responding to spoken instructions in real time. Our grounding models tie abstraction in language commands to a hierarchical planner for the robot’s execution, enabling a response-time speed-up of several orders of magnitude over baseline planners within sufficiently large domains.
Concurrently, our grounding models for generalization infer elements of the semantic representation that are subsequently combined to form a complete goal description, enabling the interpretation of commands involving novel combinations never seen during training. Taken together, our results show that the design of semantic goal representation has powerful implications for the accuracy, efficiency, and generalization capabilities of language grounding models. |
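The compositional idea described above — inferring elements of a semantic goal independently and then combining them into a complete goal description — can be illustrated with a minimal, hypothetical sketch. This is not the authors' code; the predicate names, lexicon, and keyword-lookup "models" below are invented stand-ins for the paper's learned grounding models.

```python
# Hypothetical sketch of compositional goal grounding: each goal element
# (predicate, object, room) is inferred independently, then composed into
# a complete semantic goal, so combinations never seen as a whole during
# training can still be interpreted.

# Toy "grounding models": keyword lookups standing in for learned classifiers.
PREDICATES = {"carry": "blockInRoom", "go": "agentInRoom", "walk": "agentInRoom"}
OBJECTS = {"block": "block0", "chair": "chair0"}
ROOMS = {"red": "room_red", "blue": "room_blue", "green": "room_green"}

def ground(command):
    """Map a command to a semantic goal by combining independently
    inferred elements of the representation."""
    words = command.lower().split()
    pred = next((PREDICATES[w] for w in words if w in PREDICATES), None)
    obj = next((OBJECTS[w] for w in words if w in OBJECTS), None)
    room = next((ROOMS[w] for w in words if w in ROOMS), None)
    # Compose the inferred elements into a goal term a planner could test.
    if pred == "blockInRoom" and obj and room:
        return f"blockInRoom({obj}, {room})"
    if pred == "agentInRoom" and room:
        return f"agentInRoom(agent0, {room})"
    return None

# Each element is grounded separately, so a novel pairing of a known verb
# with a known room still yields a complete goal description.
print(ground("carry the block to the green room"))  # blockInRoom(block0, room_green)
print(ground("go to the blue room"))                # agentInRoom(agent0, room_blue)
```

The grounded goal term would then be handed to a planner, which searches for any behavior satisfying it — in contrast to action-sequence models, which must have seen a similar trajectory during training.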
Database: | OpenAIRE |
External link: |