Abstract: |
As human-robot interaction capabilities become more elaborate, humans will increasingly work in close proximity with robots, and managing control when cooperation breaks down becomes correspondingly important: the system must take care of what happens when cooperation goes wrong. The current research addresses three categories of such breakdowns. In the first, the human detects a problem and issues a stop signal with a palm-up posture. In the second, the human becomes distracted and physically turns away from the shared space of cooperation. In the third, the human comes so close to the robot that safety limits are reached and detected by the robot. In each of these three cases, the robot's cognitive system detects the failure by perceiving a distinct physical state through motion capture: the palm-up posture, the change in head orientation, or the physical distance reaching a minimum threshold. In each case the robot immediately halts the current action; the system must then recover appropriately. Each error type returns a specific code, allowing the Supervisor system to handle that particular type of error. Our cognitive system allows the robot to learn composite actions as sequences of atomic actions, and these composite actions can in turn be composed into higher-level plans. When a plan fails at the level of a composite action, recovery is not trivial: should recovery take place at the level of the composite action, or of the atomic action that physically failed? Because the best recovery may depend on the physical context, we expand the plan into atomic actions and recover at this level, allowing the user to specify whether the failed action should be skipped or retried. We demonstrate that this system allows graceful recovery from three principal categories of interaction breakdown and provides an invaluable mechanism for preserving the integrity of cooperative HRI.
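To make the recovery scheme concrete, the following is a minimal, hypothetical sketch in Python, not the authors' actual implementation. The names (`Breakdown`, `ActionError`, `execute_atom`, `run_plan`) and the 0.3 m proximity threshold are illustrative assumptions; the sketch only shows the pattern the abstract describes: each breakdown type yields a distinct error code, the robot halts immediately, the plan is expanded to atomic actions, and recovery (skip or retry) happens at the atomic level.

```python
# Hypothetical sketch of error-coded breakdown handling and atomic-level
# recovery; all names and thresholds are assumptions for illustration.
from enum import Enum

class Breakdown(Enum):
    PALM_UP_STOP = 1        # human issues a stop with a palm-up posture
    HEAD_TURNED_AWAY = 2    # human orientation leaves the shared space
    PROXIMITY_LIMIT = 3     # human closer than the safety threshold

class ActionError(Exception):
    """Carries the specific breakdown code and the failed atomic action."""
    def __init__(self, code: Breakdown, atom: str):
        super().__init__(f"{code.name} during '{atom}'")
        self.code = code
        self.atom = atom

def execute_atom(atom: str, percepts: dict) -> None:
    """Run one atomic action, halting immediately on any breakdown."""
    if percepts.get("palm_up"):
        raise ActionError(Breakdown.PALM_UP_STOP, atom)
    if percepts.get("head_away"):
        raise ActionError(Breakdown.HEAD_TURNED_AWAY, atom)
    if percepts.get("distance_m", 1.0) < 0.3:  # assumed 0.3 m safety limit
        raise ActionError(Breakdown.PROXIMITY_LIMIT, atom)
    print(f"  done: {atom}")

def run_plan(plan: dict, percepts: dict, choice: str = "retry") -> None:
    """Expand each composite action into atoms; recover at the atom level."""
    for composite, atoms in plan.items():
        print(f"composite: {composite}")
        i = 0
        while i < len(atoms):
            try:
                execute_atom(atoms[i], percepts)
                i += 1
            except ActionError as err:
                print(f"  halted: {err} (code {err.code.value})")
                percepts.clear()      # assume the breakdown is then resolved
                if choice == "skip":  # user chose to skip the failed atom
                    i += 1
                # otherwise retry the same atomic action

if __name__ == "__main__":
    plan = {"hand-over": ["reach", "grasp", "lift", "give"]}
    run_plan(plan, {"palm_up": True})  # palm-up stop interrupts "reach"
```

In this sketch, the supervisor-level decision reduces to the `choice` parameter: because recovery happens on the expanded atomic sequence rather than the composite action, skipping or retrying affects only the single atom that physically failed.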