Learning to solve sequential physical reasoning problems from a scene image
Author: | Danny Driess, Jung-Su Ha, Marc Toussaint |
---|---|
Language: | English |
Year of publication: | 2021 |
Subject: | sequential manipulation; Applied Mathematics; Mechanical Engineering; planar pushing; deep learning; Artificial Intelligence; Modeling and Simulation; task and motion planning; logic geometric programming; physical reasoning; deep Q-learning; Electrical and Electronic Engineering; ddc:620; offline reinforcement learning; 620 Ingenieurwissenschaften und zugeordnete Tätigkeiten; Software |
DOI: | 10.14279/depositonce-14847 |
Description: | In this article, we propose deep visual reasoning, which is a convolutional recurrent neural network that predicts discrete action sequences from an initial scene image for sequential manipulation problems that arise, for example, in task and motion planning (TAMP). Typical TAMP problems are formalized by combining reasoning on a symbolic, discrete level (e.g., first-order logic) with continuous motion planning such as nonlinear trajectory optimization. The action sequences represent the discrete decisions on a symbolic level, which, in turn, parameterize a nonlinear trajectory optimization problem. Owing to the great combinatorial complexity of possible discrete action sequences, a large number of optimization/motion planning problems have to be solved to find a solution, which limits the scalability of these approaches. To circumvent this combinatorial complexity, we introduce deep visual reasoning: based on a segmented initial image of the scene, a neural network directly predicts promising discrete action sequences such that ideally only one motion planning problem has to be solved to find a solution to the overall TAMP problem. Our method generalizes to scenes with many and varying numbers of objects, despite being trained on only two objects at a time. This is possible by encoding the objects of the scene and the goal in (segmented) images as input to the neural network, instead of a fixed feature vector. We show that the framework can handle not only kinematic problems such as pick-and-place (as typical in TAMP), but also tool-use scenarios for planar pushing under quasi-static dynamic models. Here, the image-based representation enables generalization to shapes other than those seen during training. Results show runtime improvements of several orders of magnitude by, in many cases, removing the need to search over the discrete action sequences. |
Database: | OpenAIRE |
External link: |
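
The description above states that a convolutional recurrent network maps a segmented initial scene image to promising discrete action sequences, which then parameterize a downstream trajectory optimization. The following is a minimal, hypothetical PyTorch sketch of such a mapping: a convolutional encoder over mask channels feeding a recurrent decoder that emits action tokens. All names, channel layouts, vocabulary sizes, and network dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: convolutional encoder + recurrent decoder that predicts
# a sequence of discrete action tokens from a segmented scene image.
# All sizes and the channel/action layout are assumptions for illustration.
import torch
import torch.nn as nn

class VisualActionSequencePredictor(nn.Module):
    def __init__(self, in_channels=4, num_actions=10, hidden=128, max_steps=6):
        super().__init__()
        self.max_steps = max_steps
        # Convolutional encoder over the segmented scene image
        # (e.g., one channel per object mask plus a goal-region channel).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, hidden, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Recurrent decoder: emits one discrete action token per step,
        # conditioned on the image embedding and the previous token.
        self.embed = nn.Embedding(num_actions + 1, hidden)  # +1 for a start token
        self.rnn = nn.GRUCell(hidden, hidden)
        self.head = nn.Linear(hidden, num_actions)
        self.start_token = num_actions

    def forward(self, image):
        b = image.shape[0]
        h = self.encoder(image)  # image embedding used as initial recurrent state
        tok = torch.full((b,), self.start_token, dtype=torch.long, device=image.device)
        logits = []
        for _ in range(self.max_steps):
            h = self.rnn(self.embed(tok), h)
            step_logits = self.head(h)
            logits.append(step_logits)
            tok = step_logits.argmax(dim=-1)  # greedy decoding, for illustration only
        return torch.stack(logits, dim=1)     # (batch, max_steps, num_actions)

# Usage: a batch of two 64x64 segmented scene images with four mask channels.
if __name__ == "__main__":
    model = VisualActionSequencePredictor()
    scores = model(torch.rand(2, 4, 64, 64))
    print(scores.shape)  # torch.Size([2, 6, 10])
```

In practice the predicted token sequences would be ranked and handed to a motion planner or trajectory optimizer, so that ideally only the top-ranked sequence needs to be solved, as described in the abstract.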