Temporal segmentation and keyframe selection methods for user-generated video search-based annotation
Author: | Fernando Diaz-de-Maria, Tomás Martínez-Cortés, Iván González-Díaz, Ascensión Gallardo-Antolín |
---|---|
Language: | English |
Year of publication: | 2015 |
Subject: |
Motion analysis; Video temporal segmentation; Keyframe selection; Camera motion analysis; Video annotation; User Generated Video; Hierarchical Hidden Markov Model; Hidden Markov Model; Zoom; Segmentation; Computer vision; Image processing; Artificial intelligence; Relevance (information retrieval); Telecommunications; Computer Science Applications; General Engineering |
Source: | e-Archivo, Repositorio Institucional de la Universidad Carlos III de Madrid |
Description: |
Highlights: video temporal segmentation and keyframe selection approaches for User Generated Video (UGV); Hierarchical Hidden Markov Models applied to camera motion analysis to detect motion patterns and temporally segment the video; evaluation of the influence of camera motion on the performance of automatic UGV annotation systems; two datasets for User Generated Video developed and made publicly available.
In this paper we propose a temporal segmentation method and a keyframe selection method for User-Generated Video (UGV). Since UGV is rarely structured in shots and the user's interests are usually revealed through camera movements, the proposed UGV temporal segmentation system generates a video partition based on a camera motion classification. Motion-related mid-level features are used to feed a Hierarchical Hidden Markov Model (HHMM) that produces a user-meaningful UGV temporal segmentation. Moreover, a keyframe selection method is proposed that picks a single keyframe for fixed-content camera motion patterns, such as zoom, still, or shake, and a set of keyframes for varying-content translation patterns. The proposed video segmentation approach has been compared to a state-of-the-art algorithm, achieving an 8% performance improvement in a segmentation-based evaluation. Furthermore, a complete search-based UGV annotation system has been developed to assess the influence of the proposed algorithms on an end-user task. For that purpose, two UGV datasets have been developed and made available online. Specifically, the relevance of the considered camera motion types has been analyzed for these two datasets, and some guidelines are given to achieve the desired performance-complexity tradeoff. The keyframe selection algorithm for varying-content translation patterns has also been assessed, revealing a notable contribution to the performance of the overall UGV annotation system. Finally, it has been shown that the UGV segmentation algorithm also yields improved annotation results with respect to both a fixed-rate keyframe selection baseline and a traditional method relying on frame-level visual features. |
Database: | OpenAIRE |
External link: |
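
The description above outlines a pipeline of camera-motion-based temporal segmentation followed by per-pattern keyframe selection. The sketch below is only a minimal illustration of that idea, not the authors' implementation: a flat GaussianHMM from hmmlearn stands in for the paper's Hierarchical HMM, the per-frame mid-level motion features are synthetic placeholders, and the function names, the `stride` parameter, and the state-to-class mapping are hypothetical.

```python
# Illustrative sketch only (not the authors' code): a flat Gaussian HMM
# approximates the paper's HHMM-based camera-motion segmentation, and
# keyframes are then picked per motion pattern as described in the abstract.
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Assumed motion-pattern label set, taken from the abstract.
MOTION_CLASSES = ["still", "translation", "zoom", "shake"]


def segment_by_camera_motion(features, n_states=4, seed=0):
    """Fit an HMM to per-frame motion features and return contiguous
    (start, end, state) segments from the decoded state sequence."""
    model = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=50, random_state=seed)
    model.fit(features)               # unsupervised fit on a single video
    states = model.predict(features)  # most likely state per frame
    segments, start = [], 0
    for t in range(1, len(states) + 1):
        if t == len(states) or states[t] != states[start]:
            segments.append((start, t, int(states[start])))
            start = t
    return segments


def select_keyframes(segments, state_to_class, stride=30):
    """One keyframe (middle frame) for fixed-content patterns such as
    still, zoom or shake; evenly spaced keyframes for translation
    segments, whose visual content keeps changing."""
    keyframes = []
    for begin, end, state in segments:
        if state_to_class.get(state) == "translation":
            keyframes.extend(range(begin, end, stride))
        else:
            keyframes.append((begin + end) // 2)
    return sorted(set(keyframes))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(600, 3))  # placeholder [pan, tilt, zoom] features
    segs = segment_by_camera_motion(feats)
    # In practice the state-to-class mapping would come from labeled data;
    # here it is an arbitrary stand-in for illustration.
    mapping = {i: MOTION_CLASSES[i % len(MOTION_CLASSES)] for i in range(4)}
    print(select_keyframes(segs, mapping)[:10])
```

The split between the two functions mirrors the two contributions in the record: the HMM decoding yields a motion-based partition of the video, and the keyframe rule then treats fixed-content segments (still, zoom, shake) differently from varying-content translation segments, which receive a set of keyframes rather than a single one.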