Popis: |
Endowing artificial agents with the ability to predict the consequences of their own actions and to efficiently plan their behaviors based on such predictions is a fundamental challenge in both artificial intelligence and robotics. A computationally practical yet powerful way to model this knowledge, referred to as object affordances, is through probabilistic dependencies between actions, objects and effects: this allows inferences to be made across these dependencies, such as i) predicting the effects of an action on an object, or ii) selecting the best action from a repertoire in order to obtain a desired effect on an object. We propose a probabilistic model capable of learning the mutual interaction between objects in complex manipulation tasks, where one object plays an active tool role while being grasped and used (e.g., a hammer) while another item is passively acted upon (e.g., a nail). We consider visual affordances, meaning that we do not model object labels or categories; instead, we compute a set of visual features that represent geometrical properties (e.g., convexity, roundness), which allows previously acquired knowledge to be generalized to new objects. We describe an experiment in which a simulated humanoid robot learns an affordance model by autonomously exploring different actions with the objects present in a playground scenario. We report results showing that the robot is able to i) learn meaningful relationships between actions, tools, other objects and effects, and ii) exploit the acquired knowledge to make predictions and take optimal decisions.