Deep Learning for Detecting Multiple Space-Time Action Tubes in Videos
Author: | Michael Sapienza, Philip H. S. Torr, Fabio Cuzzolin, Suman Saha, Gurkirt Singh |
Year of publication: | 2016 |
Subject: | FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV); deep learning; space-time action detection; optical flow; motion detection; pattern recognition; dynamic programming; temporal trimming; energy maximisation; artificial intelligence |
Source: | BMVC |
DOI: | 10.48550/arxiv.1608.01529 |
Description: | In this work, we propose an approach to the spatiotemporal localisation (detection) and classification of multiple concurrent actions in temporally untrimmed videos. Our framework consists of three stages. In stage 1, appearance and motion detection networks localise and score actions from colour images and optical flow, respectively. In stage 2, the appearance-based detections are boosted by combining them with the motion detection scores, in proportion to their spatial overlap. In stage 3, sequences of detection boxes most likely to be associated with a single action instance, called action tubes, are constructed by solving two energy maximisation problems via dynamic programming. In the first pass, action paths spanning the whole video are built by linking detection boxes over time using their class-specific scores and their spatial overlap; in the second pass, temporal trimming is performed by enforcing label consistency across all constituent detection boxes. We demonstrate the performance of our algorithm on the challenging UCF-101, J-HMDB-21 and LIRIS-HARL datasets, achieving new state-of-the-art results across the board while significantly increasing detection speed at test time: compared to the previous state of the art, we report gains of 20% and 11% in mAP (mean average precision) on UCF-101 and J-HMDB-21 respectively. A simplified sketch of stages 2 and 3 is given below this record. Comment: Accepted by the British Machine Vision Conference (BMVC) 2016 |
Database: | OpenAIRE |
External link: |
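The description above reduces stages 2 and 3 to two concrete operations: an overlap-weighted fusion of appearance and motion detection scores, and a Viterbi-style dynamic programme that links per-frame detections into action paths. The sketch below illustrates both under simplifying assumptions (a single action class, an additive fusion rule); names such as `fuse_scores` and `link_path` are illustrative, not the paper's implementation.

```python
# A minimal sketch (not the authors' released code) of stage 2 (fusing
# appearance and motion scores by spatial overlap) and the first
# dynamic-programming pass of stage 3 (linking per-frame detections into
# an action path). The additive fusion rule and the single-class
# simplification are assumptions made for clarity.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def fuse_scores(app_boxes, app_scores, mot_boxes, mot_scores):
    """Stage 2: boost each appearance detection with the motion detection
    scores, weighted by the spatial overlap between the two detections."""
    fused = list(app_scores)
    for i, abox in enumerate(app_boxes):
        for mbox, mscore in zip(mot_boxes, mot_scores):
            fused[i] += iou(abox, mbox) * mscore
    return fused

def link_path(frames):
    """Stage 3, first pass: pick one detection per frame so that the sum of
    class-specific scores plus the overlap between consecutive boxes is
    maximised (Viterbi-style DP). `frames` is a list of (boxes, scores)
    pairs for a single action class; returns one box index per frame."""
    boxes, scores = frames[0]
    best = np.asarray(scores, dtype=float)  # best path score ending at each box
    back = []                               # backpointers, one array per transition
    prev_boxes = boxes
    for boxes, scores in frames[1:]:
        trans = np.array([[iou(p, b) for b in boxes] for p in prev_boxes])
        totals = best[:, None] + trans      # score of reaching box b via box p
        back.append(totals.argmax(axis=0))
        best = totals.max(axis=0) + np.asarray(scores, dtype=float)
        prev_boxes = boxes
    path = [int(best.argmax())]             # trace the best path backwards
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

The paper's second DP pass, which temporally trims the resulting paths by enforcing label consistency over the constituent boxes, would operate on the output of `link_path` and is omitted from this sketch.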