Pose-invariant kinematic features for action recognition

Authors: Wei-Yun Yau, Nadia Magnenat Thalmann, Manoj Ramanathan, Eam Khwang Teoh
Contributors: School of Electrical and Electronic Engineering, 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Institute for Media Innovation (IMI)
Year of publication: 2017
Source: APSIPA
DOI: 10.1109/apsipa.2017.8282038
Description: Recognizing actions in videos is a difficult task due to several factors, such as dynamic backgrounds, occlusion, and pose variations. To address the pose-variation problem, we propose a simple method based on a novel set of pose-invariant kinematic features encoded in a human body-centric space. The proposed framework begins with detection of the neck point, which serves as the origin of the body-centric space. We propose a deep-learning-based classifier that detects the neck point from the output of a fully connected network layer. With the help of the detected neck, a propagation mechanism is proposed to divide the foreground region into head, torso, and leg grids. The motion observed in each of these body-part grids is represented using a set of pose-invariant kinematic features. These features describe the motion of the foreground (body) region relative to the detected neck point's motion and are encoded according to the view in a human body-centric space. Based on these features, pose-invariant action recognition can be achieved. Because a body-centric space is used, actions with non-upright human postures can also be handled easily. To test the framework's effectiveness on non-upright postures, a new dataset is introduced with 8 non-upright actions performed by 35 subjects in 3 different views. Experiments were conducted on a benchmark dataset and the newly proposed non-upright action dataset to identify limitations and gain insights into the proposed framework.
Funding: NRF (National Research Foundation, Singapore); A*STAR (Agency for Science, Technology and Research, Singapore)
Version: Accepted version
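To illustrate the core idea of the description, the sketch below shows one plausible way to encode per-grid motion relative to a detected neck point so that the features become invariant to the body's global motion and orientation. This is not the authors' implementation: the function names, the dict-of-grids input layout, and the (speed, direction) feature encoding are all assumptions for illustration.

```python
# Minimal sketch (assumed layout, not the paper's code): express the mean
# motion of each body-part grid relative to the neck point's motion, then
# encode it as speed plus direction in a body-centric frame.
import numpy as np

def body_centric_features(grid_motions, neck_motion):
    """Encode each body-part grid's motion relative to the neck's motion.

    grid_motions: dict mapping a grid name ('head', 'torso', 'legs') to an
                  (N, 2) array of motion vectors observed in that grid.
    neck_motion:  (2,) array, motion vector of the detected neck point.

    Returns a flat feature vector: (speed, cos(angle), sin(angle)) per grid.
    """
    features = []
    for part in ('head', 'torso', 'legs'):
        flow = np.asarray(grid_motions[part], dtype=float)
        mean_flow = flow.mean(axis=0) if len(flow) else np.zeros(2)
        rel = mean_flow - neck_motion        # subtract the neck's motion
        speed = np.linalg.norm(rel)          # relative speed of the part
        angle = np.arctan2(rel[1], rel[0])   # direction in the neck-centred frame
        features.extend([speed, np.cos(angle), np.sin(angle)])
    return np.array(features)

# Toy usage with synthetic motion vectors.
rng = np.random.default_rng(0)
grids = {p: rng.normal(size=(50, 2)) for p in ('head', 'torso', 'legs')}
print(body_centric_features(grids, neck_motion=np.array([0.5, -0.2])))
```

Because every grid's motion is measured against the neck point rather than the image frame, a uniform translation of the whole body cancels out, which is the kind of invariance the description attributes to the body-centric encoding.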
Database: OpenAIRE