Human POSEitioning System (HPS): 3D Human Pose Estimation and Self-localization in Large Scenes from Body-Mounted Sensors
Author: | Vladimir Guzov, Aymen Mir, Torsten Sattler, Gerard Pons-Moll |
---|---|
Year of publication: | 2021 |
Subject: |
FOS: Computer and information sciences
Orientation (computer vision) business.industry Computer science Computer Vision and Pattern Recognition (cs.CV) Computer Science - Computer Vision and Pattern Recognition Wearable computer 02 engineering and technology Tracking (particle physics) 3D pose estimation 030218 nuclear medicine & medical imaging Visualization 03 medical and health sciences 0302 clinical medicine Inertial measurement unit Pattern recognition (psychology) 0202 electrical engineering electronic engineering information engineering 020201 artificial intelligence & image processing Computer vision Artificial intelligence business Pose |
Source: | 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) |
Description: | We introduce the Human POSEitioning System (HPS), a method to recover the full 3D pose of a human registered with a 3D scan of the surrounding environment using wearable sensors. Using IMUs attached to the body limbs and a head-mounted camera looking outwards, HPS fuses camera-based self-localization with IMU-based human body tracking. The former provides drift-free but noisy position and orientation estimates, while the latter is accurate in the short term but subject to drift over longer periods of time. We show that our optimization-based integration exploits the benefits of the two, resulting in pose accuracy free of drift. Furthermore, we integrate 3D scene constraints into our optimization, such as foot contact with the ground, resulting in physically plausible motion. HPS complements more common third-person-based 3D pose estimation methods. It allows capturing larger recording volumes and longer periods of motion, and could be used for VR/AR applications where humans interact with the scene without requiring direct line of sight with an external camera, or to train agents that navigate and interact with the environment based on first-person visual input, like real humans. With HPS, we recorded a dataset of humans interacting with large 3D scenes (300-1000 sq. m), consisting of 7 subjects and more than 3 hours of diverse motion. The dataset, code and video will be available on the project page: http://virtualhumans.mpi-inf.mpg.de/hps/ . Comment: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) |
Database: | OpenAIRE |
External link: |