XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera
Author: Weipeng Xu, Helge Rhodin, Franziska Mueller, Pascal Fua, Christian Theobalt, Mohamed Elgharib, Dushyant Mehta, Gerard Pons-Moll, Oleksandr Sotnychenko, Hans-Peter Seidel
Subject: FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV); Computer Science - Graphics (cs.GR); human body pose; pose estimation; motion capture; monocular; RGB; real-time; convolutional neural network; artificial neural network; computer vision; artificial intelligence; computer graphics and computer-aided design; image processing
Description: We present a real-time approach for multi-person 3D motion capture at over 30 fps using a single RGB camera. It operates successfully in generic scenes that may contain occlusions by objects and by other people. Our method operates in three subsequent stages. The first stage is a convolutional neural network (CNN) that estimates 2D and 3D pose features along with identity assignments for all visible joints of all individuals. We contribute a new architecture for this CNN, called SelecSLS Net, that uses novel selective long- and short-range skip connections to improve information flow, allowing for a drastically faster network without compromising accuracy. In the second stage, a fully connected neural network turns the possibly partial (on account of occlusion) 2D- and 3D-pose features for each subject into a complete 3D pose estimate per individual. The third stage applies space-time skeletal model fitting to the predicted 2D and 3D pose per subject to further reconcile the 2D and 3D pose and enforce temporal coherence. Our method returns the full skeletal pose in joint angles for each subject. This is a further key distinction from previous work, which does not produce joint-angle results of a coherent skeleton in real time for multi-person scenes. The proposed system runs on consumer hardware at a previously unseen speed of more than 30 fps given 512x320 images as input, while achieving state-of-the-art accuracy, as we demonstrate on a range of challenging real-world scenes. Comment: To appear in ACM Transactions on Graphics (SIGGRAPH) 2020
Database: OpenAIRE
External link:
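The three-stage pipeline described in the abstract can be sketched in code. This is a minimal illustrative stand-in, not the authors' implementation: the joint count, random per-joint features, zero-filling of occluded joints, and moving-average smoothing are all assumptions chosen only to keep the example self-contained and runnable; the real system uses the SelecSLS Net CNN, a learned lifting network, and space-time skeletal model fitting.

```python
import numpy as np

NUM_JOINTS = 21  # assumption: illustrative joint count, not taken from the paper

def stage1_cnn_features(image, num_people):
    """Stage 1 (stand-in): per-joint 2D/3D pose features plus a visibility
    mask mimicking identity-assigned detections of *visible* joints.
    The real method runs the SelecSLS Net CNN here."""
    visible = np.random.rand(num_people, NUM_JOINTS) > 0.3  # occlusion mask
    feats_2d = np.random.rand(num_people, NUM_JOINTS, 2)
    feats_3d = np.random.rand(num_people, NUM_JOINTS, 3)
    return feats_2d, feats_3d, visible

def stage2_lift_to_full_pose(feats_2d, feats_3d, visible):
    """Stage 2 (stand-in): the paper uses a fully connected network to turn
    possibly partial features into a complete per-person 3D pose; here we
    simply zero-fill occluded joints to keep the sketch dependency-free."""
    return np.where(visible[..., None], feats_3d, 0.0)

def stage3_skeletal_fit(pose_3d_sequence):
    """Stage 3 (stand-in): space-time skeletal model fitting enforcing
    temporal coherence, mimicked by a moving average over frames."""
    return np.mean(pose_3d_sequence, axis=0)

# Toy run over a short "sequence" of frames for two people.
frames = []
for _ in range(5):
    f2d, f3d, vis = stage1_cnn_features(image=None, num_people=2)
    frames.append(stage2_lift_to_full_pose(f2d, f3d, vis))
smoothed = stage3_skeletal_fit(np.stack(frames))
print(smoothed.shape)  # (people, joints, xyz)
```

The staged decomposition is what makes the real system fast: the heavy CNN runs once per frame, while the per-person lifting and model fitting are lightweight enough to keep the full pipeline above 30 fps.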