The THUMOS Challenge on Action Recognition for Videos 'in the Wild'

Authors: Haroon Idrees, Amir Roshan Zamir, Alexander Gorban, Mubarak Shah, Ivan Laptev, Yu-Gang Jiang, Rahul Sukthankar
Contributors: University of Central Florida (UCF), Orlando; Stanford University; Fudan University, Shanghai; Google Research; WILLOW team (Models of visual object recognition and scene understanding), Inria de Paris; Département d'informatique de l'École normale supérieure (DI-ENS), École normale supérieure - Paris (ENS Paris), Université Paris sciences et lettres (PSL), Centre National de la Recherche Scientifique (CNRS), Institut National de Recherche en Informatique et en Automatique (Inria)
Language: English
Year of publication: 2016
Subjects:
FOS: Computer and information sciences
Computer science
Computer Vision and Pattern Recognition (cs.CV)
Computer Science - Computer Vision and Pattern Recognition
02 engineering and technology
0202 electrical engineering, electronic engineering, information engineering
020201 artificial intelligence & image processing
020206 networking & telecommunications
Machine learning
Artificial intelligence
Signal Processing
Benchmark
Dataset
Data collection
Annotation
Empirical research
Action Recognition
Action Detection
Action Localization
Untrimmed Videos
UCF101
THUMOS
Software
[INFO]Computer Science [cs]
[INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV]
Source: Computer Vision and Image Understanding, Elsevier, 2016, ⟨10.1016/j.cviu.2016.10.018⟩
ISSN: 1077-3142; eISSN: 1090-235X
DOI: 10.1016/j.cviu.2016.10.018
Description: Automatically recognizing and localizing wide ranges of human actions is of crucial importance for video understanding. Towards this goal, the THUMOS challenge was introduced in 2013 to serve as a benchmark for action recognition. Until then, video action recognition, including the THUMOS challenge, had focused primarily on the classification of pre-segmented (i.e., trimmed) videos, which is an artificial task. In THUMOS 2014, we elevated action recognition to a more practical level by introducing temporally untrimmed videos. These also include 'background videos', which share scenes and backgrounds similar to those of action videos but are devoid of the specific actions. The three editions of the challenge organized in 2013–2015 have made THUMOS a common benchmark for action classification and detection, and the annual challenge is widely attended by teams from around the world. In this paper we describe the THUMOS benchmark in detail and give an overview of data collection and annotation procedures. We present the evaluation protocols used to quantify results in the two THUMOS tasks of action classification and temporal detection. We also present results of submissions to the THUMOS 2015 challenge and review the participating approaches. Additionally, we include a comprehensive empirical study evaluating the differences in action recognition between trimmed and untrimmed videos, and how well methods trained on trimmed videos generalize to untrimmed videos. We conclude by proposing several directions and improvements for future THUMOS challenges.
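As an illustration of the temporal detection task mentioned in the description, below is a minimal sketch (not taken from the paper) of the interval-IoU criterion commonly used to score temporal action detections on benchmarks such as THUMOS: a predicted segment counts as a true positive when its temporal IoU with an unmatched ground-truth segment of the same class meets a threshold (e.g. 0.5). The function name and the example segments are hypothetical.

```python
def temporal_iou(pred, gt):
    """Intersection-over-union of two temporal intervals (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# Hypothetical example: a detection is accepted at the 0.5 tIoU threshold.
pred_segment = (12.0, 18.5)   # predicted action interval, in seconds
gt_segment = (13.0, 19.0)     # ground-truth annotation, in seconds
print(temporal_iou(pred_segment, gt_segment) >= 0.5)  # True (tIoU ≈ 0.79)
```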
Preprint submitted to Computer Vision and Image Understanding
Database: OpenAIRE