NMM-HRI: Natural Multi-modal Human-Robot Interaction with Voice and Deictic Posture via Large Language Model
Author: | Lai, Yuzhi; Yuan, Shenghai; Nassar, Youssef; Fan, Mingyu; Gopal, Atmaraaj; Yorita, Arihiro; Kubota, Naoyuki; Rätsch, Matthias |
---|---|
Publication Year: | 2025 |
Subject: | |
Document Type: | Working Paper |
Description: | Translating human intent into robot commands is crucial for the future of service robots in an aging society. Existing Human-Robot Interaction (HRI) systems relying on gestures or verbal commands are impractical for the elderly due to difficulties with complex syntax or sign language. To address this challenge, this paper introduces a multi-modal interaction framework that combines voice and deictic posture information to create a more natural HRI system. Visual cues are first processed by an object detection model to gain a global understanding of the environment, and bounding boxes are then estimated from depth information. A large language model (LLM) generates robot action sequences from the voice-to-text commands and the temporally aligned selected bounding boxes, while key control syntax constraints are applied to avoid potential LLM hallucination. The system is evaluated on real-world tasks of varying complexity using a Universal Robots UR3e manipulator. Our method demonstrates significantly better HRI performance in terms of accuracy and robustness. To benefit the research community and the general public, we will make our code and design open-source. Comment: Submitted to RAM |
Database: | arXiv |
External Link: |
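
The following is a minimal, illustrative sketch (not the authors' released code) of the pipeline the abstract describes: an object detector provides 3D bounding boxes, the detection temporally aligned with the spoken command and closest to the deictic pointing cue is selected, and the LLM output is constrained to a fixed command syntax so only valid actions are executed. All identifiers (`DetectedObject`, `select_pointed_object`, `ALLOWED_ACTIONS`, the action grammar, and the placeholder geometry) are assumptions for illustration and do not come from the paper.

```python
# Minimal sketch of the multi-modal HRI pipeline described in the abstract.
# Every name and data format here is a hypothetical stand-in, not the paper's API.

from dataclasses import dataclass
from typing import List, Optional
import re

@dataclass
class DetectedObject:
    label: str        # class name from the object detector
    box_3d: tuple     # (x, y, z, w, h, d) estimated from depth
    timestamp: float  # frame time, used for temporal alignment with speech

# Fixed control syntax; the parser below rejects anything outside this set.
ALLOWED_ACTIONS = {"PICK", "PLACE", "MOVE_TO", "OPEN_GRIPPER", "CLOSE_GRIPPER"}

def select_pointed_object(objects: List[DetectedObject],
                          pointing_ray: tuple,
                          command_time: float,
                          max_dt: float = 1.0) -> Optional[DetectedObject]:
    """Select the detection temporally aligned with the spoken command
    (|t - t_cmd| < max_dt) that lies closest to the deictic pointing cue."""
    candidates = [o for o in objects if abs(o.timestamp - command_time) < max_dt]
    if not candidates:
        return None

    def ray_distance(obj: DetectedObject) -> float:
        # Placeholder geometry: distance between object centroid and a point on
        # the pointing ray; a real system would score against the full 3D ray.
        ox, oy, oz = obj.box_3d[:3]
        rx, ry, rz = pointing_ray[:3]
        return ((ox - rx) ** 2 + (oy - ry) ** 2 + (oz - rz) ** 2) ** 0.5

    return min(candidates, key=ray_distance)

def build_prompt(voice_text: str, target: DetectedObject) -> str:
    """Fuse the transcribed command with the selected object so the LLM can
    ground deictic words like 'this' or 'that' to a concrete detection."""
    return (
        "You control a UR3e manipulator.\n"
        f"Allowed actions: {sorted(ALLOWED_ACTIONS)}.\n"
        "Reply with one action per line, format: ACTION(object_label).\n"
        f"User said: \"{voice_text}\"\n"
        f"User pointed at: {target.label} at {target.box_3d[:3]}\n"
    )

def parse_actions(llm_reply: str) -> List[str]:
    """Keep only lines that match the control syntax; this filtering is the
    guard against hallucinated or malformed commands."""
    pattern = re.compile(r"^([A-Z_]+)\((\w+)\)$")
    actions = []
    for line in llm_reply.strip().splitlines():
        m = pattern.match(line.strip())
        if m and m.group(1) in ALLOWED_ACTIONS:
            actions.append(m.group(0))
    return actions

if __name__ == "__main__":
    scene = [DetectedObject("cup", (0.4, 0.1, 0.05, 0.08, 0.08, 0.10), 3.2),
             DetectedObject("bottle", (0.1, 0.3, 0.05, 0.07, 0.07, 0.20), 3.3)]
    target = select_pointed_object(scene, pointing_ray=(0.38, 0.12, 0.0),
                                   command_time=3.25)
    prompt = build_prompt("put this in the box", target)
    # Stand-in for the LLM response; note the last line violates the syntax.
    fake_reply = "PICK(cup)\nMOVE_TO(box)\nPLACE(box)\nsing a song"
    print(parse_actions(fake_reply))  # -> ['PICK(cup)', 'MOVE_TO(box)', 'PLACE(box)']
```

In this sketch the syntax-constrained parser, rather than the prompt alone, is what keeps hallucinated LLM output from reaching the manipulator, mirroring the "key control syntax constraints" mentioned in the abstract.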