Forming Real-World Human-Robot Cooperation for Tasks With General Goal
Author: Tao, Lingfeng; Bowman, Michael; Zhang, Jiucai; Zhang, Xiaoli
Year of publication: 2020
Subject:
Source: IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 762-769, April 2022
Document type: Working Paper
DOI: 10.1109/LRA.2021.3133588
Description: In human-robot cooperation, the robot works with a human to accomplish a task together. Existing approaches assume the human holds a specific goal throughout the cooperation, and the robot infers that goal and acts toward it. In real-world environments, however, a human usually starts with only a general goal (e.g., a general direction or area in motion planning), which must be refined into a specific goal (i.e., an exact position) during the cooperation. This specification process is interactive and dynamic, depending on the environment and the partner's behavior. A robot that ignores the goal specification process may frustrate the human partner, prolong the time needed to reach an agreement, and compromise team performance. This work presents the Evolutionary Value Learning approach, which models the dynamics of the goal specification process with State-based Multivariate Bayesian Inference and goal specificity-related features. The model enables the robot to actively enhance the human's goal specification process and to learn a cooperative policy via Deep Reinforcement Learning. Our method outperforms existing methods, achieving a faster goal specification process and better team performance in a dynamic ball-balancing task with real human subjects. Comment: Published in RAL
Database: arXiv
External link:
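
The description above refers to modeling the human's goal specification with State-based Multivariate Bayesian Inference over goal specificity-related features. The following is a minimal, hypothetical sketch of the general idea of such a belief update over candidate specific goals inside a general goal region; the function name, the motion-alignment likelihood, and the parameter `beta` are illustrative assumptions and not the paper's actual implementation.

```python
import numpy as np

def update_goal_belief(belief, candidate_goals, hand_pos, hand_vel, beta=5.0):
    """One Bayesian update step: P(g | action) ∝ P(action | g) * P(g).

    Assumed likelihood model (for illustration only): the human tends to move
    toward their intended specific goal, so each candidate goal is scored by
    the cosine alignment between the observed velocity and the direction to it.
    """
    directions = candidate_goals - hand_pos                                 # vectors to each candidate goal
    directions /= np.linalg.norm(directions, axis=1, keepdims=True) + 1e-8  # unit directions
    vel = hand_vel / (np.linalg.norm(hand_vel) + 1e-8)                      # unit velocity
    alignment = directions @ vel                                            # cosine similarity per goal
    likelihood = np.exp(beta * alignment)                                   # soft preference for aligned goals
    posterior = belief * likelihood
    return posterior / posterior.sum()                                      # normalized posterior belief

# Example: three candidate positions inside the general goal area,
# starting from a uniform prior and one observed human motion sample.
goals = np.array([[0.3, 0.8], [0.5, 0.9], [0.7, 0.8]])
belief = np.ones(len(goals)) / len(goals)
belief = update_goal_belief(belief, goals,
                            hand_pos=np.array([0.5, 0.2]),
                            hand_vel=np.array([0.1, 0.6]))
print(belief)  # belief sharpens toward goals consistent with the observed motion
```

In the paper's framing, a belief of this kind could be queried by the robot's Deep Reinforcement Learning policy so that the robot's own actions drive the posterior toward a single specific goal faster; the coupling shown here is only a sketch of that interaction.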