VideoMem: Constructing, Analyzing, Predicting Short-Term and Long-Term Video Memorability
Author: | Martin Engilberge, Claire-Hélène Demarty, Ngoc Q. K. Duong, Romain Cohendet |
---|---|
Contributors: | Duong, Ngoc |
Year of publication: | 2019 |
Subject: |
FOS: Computer and information sciences, [INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI], Computer Science - Computer Vision and Pattern Recognition (cs.CV), [STAT.ML] Statistics [stat]/Machine Learning [stat.ML], Computer Science - Multimedia (cs.MM), [SCCO.COMP] Cognitive science/Computer science, Artificial neural network, Machine learning, Artificial intelligence, Computer vision, Memorization, Visualization, Ranking (information retrieval), Task analysis |
Source: | ICCV |
Description: | Humans share a strong tendency to memorize or forget some of the visual information they encounter. This paper focuses on providing computational models for the prediction of the intrinsic memorability of visual content. To address this new challenge, we introduce a large-scale dataset (VideoMem) composed of 10,000 videos annotated with memorability scores. In contrast to previous work on image memorability -- where memorability was measured only a few minutes after memorization -- memory performance is here measured twice: a few minutes after memorization and again 24-72 hours later. Hence, the dataset comes with both short-term and long-term memorability annotations. After an in-depth analysis of the dataset, we investigate several deep-neural-network-based models for the prediction of video memorability. Our best model, which uses a ranking loss, achieves a Spearman's rank correlation of 0.494 for short-term memorability prediction, while our proposed model with an attention mechanism provides insights into what makes content memorable. The VideoMem dataset with pre-extracted features is publicly available. |
Database: | OpenAIRE |
External link: |
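The description above reports model quality as a Spearman's rank correlation (0.494 for short-term memorability), i.e. how well the predicted ranking of videos by memorability agrees with the annotated ranking. As a minimal illustrative sketch (toy numbers, not the paper's data), this metric can be computed in plain Python via the classic formula, assuming no tied scores:

```python
def rank(values):
    # Assign rank 1..n by ascending value (assumes no ties).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    # where d_i is the difference between the ranks of x_i and y_i.
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical predicted vs. annotated memorability scores:
pred = [0.91, 0.42, 0.77, 0.55, 0.30]
truth = [0.88, 0.60, 0.70, 0.51, 0.25]
print(spearman(pred, truth))  # → 0.9
```

A perfect ranking yields rho = 1.0, a fully reversed one -1.0; the paper's best short-term score of 0.494 sits between chance (0) and a perfect ordering.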