The DAily Home LIfe Activity Dataset: A High Semantic Activity Dataset for Online Recognition

Authors: Astrid Orcesi, Geoffrey Vaquette, Laurent Lucat, Catherine Achard
Contributors: Département Intelligence Ambiante et Systèmes Interactifs (DIASI), Laboratoire d'Intégration des Systèmes et des Technologies (LIST), Direction de Recherche Technologique (CEA) (DRT (CEA)), Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Direction de Recherche Technologique (CEA) (DRT (CEA)), Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Université Paris-Saclay, Institut des Systèmes Intelligents et de Robotique (ISIR), Université Pierre et Marie Curie - Paris 6 (UPMC)-Centre National de la Recherche Scientifique (CNRS), Laboratoire d'Intégration des Systèmes et des Technologies (LIST (CEA))
Year of publication: 2017
Subject:
Source: FG
2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017)
2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), May 2017, Washington, DC, United States. pp. 497-504, ⟨10.1109/FG.2017.67⟩
Description: Conference: 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017; Conference Date: 30 May 2017 through 3 June 2017; Conference Code: 128713; International audience; In this article, we introduce the DAily Home LIfe Activity (DAHLIA) Dataset, a new dataset adapted to the context of smart homes and video assistance. Videos were recorded in realistic conditions, with 3 Kinect™ v2 sensors located as they would be in a real context. The long-duration activities were performed in an unconstrained way (participants received few instructions) and in a continuous (untrimmed) sequence, resulting in long videos (39 min on average per subject). In contrast to previously published databases, in which labeled actions are very short and carry low semantic content, this new database focuses on high-level semantic activities such as 'Preparing lunch' or 'House Working'. As a baseline, we evaluated several metrics on three different algorithms designed for online action recognition or detection.
Database: OpenAIRE