Multi-modal anchoring for human–robot interaction
| Author: | Gerhard Sagerer, Marcus Kleinehagenbrock, Jannik Fritsch, Thomas Plötz, Sebastian Lang, Gernot A. Fink |
|---|---|
| Year of publication: | 2003 |
| Subject: | multi-modal person tracking; anchoring; human–robot interaction; mobile robot; computer vision |
| Source: | Robotics and Autonomous Systems 43 (2003) 133–147 |
| ISSN: | 0921-8890 |
| DOI: | 10.1016/s0921-8890(02)00355-x |
| Description: | This paper presents a hybrid approach for tracking humans with a mobile robot that integrates face and leg detection results extracted from image and laser range data, respectively. The different percepts are linked to their symbolic counterparts "legs" and "face" by anchors as defined by Coradeschi and Saffiotti [Anchoring symbols to sensor data: preliminary report, in: Proceedings of the Conference of the American Association for Artificial Intelligence, 2000, pp. 129-135]. In order to anchor the composite object "person", we extend the anchoring framework to combine the different component anchors belonging to the same person. This makes it possible to deal with perceptual algorithms that have different spatio-temporal properties and provides a structured way of integrating anchor data from multiple modalities. An evaluation demonstrates the performance of our approach. (C) 2003 Elsevier Science B.V. All rights reserved. (An illustrative sketch of the composite-anchor idea follows this record.) |
| Database: | OpenAIRE |
| External link: | |
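
The description above outlines how percept-level component anchors (face from images, legs from laser range data) are combined into a composite anchor for the symbol "person". The following is a minimal Python sketch of that idea only; it is not the authors' implementation, and all names (`Percept`, `ComponentAnchor`, `PersonAnchor`, `update`, `estimate`) as well as the simple distance gating and averaging are assumptions made here for illustration.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Percept:
    """A single detection from one modality (a face in an image or legs in a laser scan)."""
    modality: str     # "face" or "legs"
    position: float   # simplified 1D position of the detection, e.g. bearing in radians
    timestamp: float  # acquisition time of the underlying sensor data


@dataclass
class ComponentAnchor:
    """Links one symbol (e.g. 'face-of-person-1') to percepts of a single modality."""
    symbol: str
    modality: str
    last_percept: Optional[Percept] = None

    def update(self, percept: Percept, max_dist: float = 0.3) -> bool:
        """Re-anchor if the percept matches this modality and lies close to the last anchored one."""
        if percept.modality != self.modality:
            return False
        if self.last_percept is None or abs(percept.position - self.last_percept.position) <= max_dist:
            self.last_percept = percept
            return True
        return False


@dataclass
class PersonAnchor:
    """Composite anchor for the symbol 'person', built from face and leg component anchors."""
    symbol: str

    def __post_init__(self) -> None:
        self.face = ComponentAnchor(f"face-of-{self.symbol}", "face")
        self.legs = ComponentAnchor(f"legs-of-{self.symbol}", "legs")

    def update(self, percept: Percept) -> bool:
        """Route an incoming percept to the matching component anchor."""
        return self.face.update(percept) or self.legs.update(percept)

    def estimate(self) -> Optional[float]:
        """Fuse the component estimates (a plain average here, purely for illustration)."""
        positions = [a.last_percept.position for a in (self.face, self.legs) if a.last_percept]
        return sum(positions) / len(positions) if positions else None


if __name__ == "__main__":
    person = PersonAnchor("person-1")
    # Legs are detected frequently from laser range data, the face only occasionally from images.
    person.update(Percept("legs", 0.10, 0.0))
    person.update(Percept("legs", 0.12, 0.1))
    person.update(Percept("face", 0.15, 0.2))
    print(person.estimate())  # fused position of the composite 'person' anchor
```

Keeping separate component anchors before fusing mirrors the point made in the description: the image-based and laser-based detectors have different spatio-temporal properties, so each modality is tracked on its own and only then integrated at the composite level.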