Multimodal Video-to-Near-Scene Annotation
Author: Hua-Tsung Chen, Chien-Li Chou, Suh-Yin Lee
Year of publication: 2017
Subject: Information retrieval; Computer science; Feature extraction; Search engine indexing; Image processing and computer vision; Software engineering; Computer Science Applications; Visualization; Annotation; Signal processing; Trie; Entropy (information theory); Media technology; Electrical and electronic engineering; tf–idf; Image retrieval
Source: IEEE Transactions on Multimedia, 19:354–366
ISSN: 1941-0077, 1520-9210
DOI: 10.1109/tmm.2016.2614426
Description: Traditional video annotation approaches focus on annotating keyframes/shots or whole videos with semantic keywords. However, the extraction of keyframes/shots may lack semantic meaning, and a few keywords can hardly describe the content of a long video covering multiple topics. In this work, near-scenes, which contain similar concepts, topics, or semantic meanings, are designed for better video content understanding and annotation. We propose a novel framework of hierarchical video-to-near-scene annotation that not only preserves but also purifies the semantic meanings of near-scenes. To detect near-scenes, a pattern-based prefix tree is first constructed for fast retrieval of near-duplicate videos. Videos containing similar near-duplicate segments and similar keywords are then clustered using multimodal features, including visual and textual features. To improve the precision of near-scene detection, a pattern-to-intensity-mark (PIM) method is proposed to perform precise frame-level near-duplicate segment alignment. For each near-scene, a video-to-concept distribution model is designed to analyze the representativeness of keywords and the discrimination among clusters, using the proposed potential term frequency, inverse document frequency, and entropy. Tags are ranked according to video-to-concept distribution scores, and the tags with the highest scores are propagated to the detected near-scenes. Extensive experiments demonstrate that the proposed PIM outperforms the compared state-of-the-art approaches in terms of quality of segments and quality of frames for near-scene detection. Furthermore, the proposed framework of hierarchical video-to-near-scene annotation achieves high-quality near-scene annotation in terms of mean average precision.
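The record does not reproduce the paper's scoring formulas, but the tag-ranking step it describes amounts to weighing a keyword's within-cluster frequency against its corpus-wide rarity and its distribution entropy. The Python sketch below illustrates that idea only; the function name `tag_scores`, the plain-count term frequency standing in for the paper's "potential term frequency", and the specific combination `p * idf * (1 + entropy)` are illustrative assumptions, not the authors' exact model.

```python
import math
from collections import Counter

def tag_scores(cluster_docs, all_docs):
    """Rank candidate tags for one near-scene cluster (illustrative sketch).

    cluster_docs: list of keyword lists, one per video in the cluster.
    all_docs:     list of keyword lists, one per video in the whole corpus.
    Returns {tag: score}; higher means more representative and discriminative.
    """
    n_docs = len(all_docs)
    # Document frequency over the whole corpus (for the IDF term).
    df = Counter(tag for doc in all_docs for tag in set(doc))

    # Term frequency inside the cluster (a plain count stands in for the
    # paper's "potential term frequency").
    tf = Counter(tag for doc in cluster_docs for tag in doc)
    total = sum(tf.values()) or 1

    scores = {}
    for tag, count in tf.items():
        p = count / total                          # relative frequency within the cluster
        idf = math.log(n_docs / (1 + df[tag]))     # rarity across the corpus
        # Entropy of the tag's per-video distribution inside the cluster:
        # tags spread evenly over many of the cluster's videos are rewarded.
        per_video = [doc.count(tag) for doc in cluster_docs if tag in doc]
        s = sum(per_video)
        entropy = -sum((c / s) * math.log(c / s) for c in per_video) if s else 0.0
        scores[tag] = p * idf * (1.0 + entropy)
    return scores
```

Ranking the returned dictionary by value and propagating the top-scoring tags to each detected near-scene mirrors the propagation step described in the abstract.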
Database: OpenAIRE
External link: