Show Me What I Like: Detecting User-Specific Video Highlights Using Content-Based Multi-Head Attention
Author: | Bhattacharya, Uttaran; Wu, Gang; Petrangeli, Stefano; Swaminathan, Viswanathan; Manocha, Dinesh |
---|---|
Year of publication: | 2022 |
Subject: | |
Source: | In Proceedings of the 30th ACM International Conference on Multimedia, 2022, Lisboa, Portugal |
Document type: | Working Paper |
DOI: | 10.1145/3503161.3547843 |
Description: | We propose a method to detect individualized highlights for users on given target videos based on their preferred highlight clips marked on previous videos they have watched. Our method explicitly leverages the contents of both the preferred clips and the target videos using pre-trained features for the objects and the human activities. We design a multi-head attention mechanism to adaptively weigh the preferred clips based on their object- and human-activity-based contents, and fuse them using these weights into a single feature representation for each user. We compute similarities between these per-user feature representations and the per-frame features computed from the desired target videos to estimate the user-specific highlight clips from the target videos. We test our method on a large-scale highlight detection dataset containing the annotated highlights of individual users. Compared to current baselines, we observe an absolute improvement of 2-4% in the mean average precision of the detected highlights. We also perform extensive ablation experiments on the number of preferred highlight clips associated with each user as well as on the object- and human-activity-based feature representations to validate that our method is indeed both content-based and user-specific. Comment: 14 pages, 5 figures, 7 tables |
Database: | arXiv |
External link: |
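
The description above outlines the core computation: multi-head attention pools a user's preferred-clip features into a single per-user representation, which is then compared against per-frame features of the target video to score candidate highlights. Below is a minimal PyTorch sketch of that idea; the class name `UserHighlightScorer`, the 512-dimensional features, the learnable pooling query, and the cosine-similarity scoring are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' code): fuse a user's preferred-clip features
# into one user representation via multi-head attention, then score each frame
# of a target video by its similarity to that representation.
# Feature dimensions and module names are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class UserHighlightScorer(nn.Module):
    def __init__(self, feat_dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Multi-head attention adaptively weighs the preferred clips.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Learnable query that pools the weighted clips into one user vector.
        self.user_query = nn.Parameter(torch.randn(1, 1, feat_dim))

    def forward(self, clip_feats: torch.Tensor, frame_feats: torch.Tensor) -> torch.Tensor:
        """
        clip_feats:  (num_clips, feat_dim)  features of the user's preferred clips
        frame_feats: (num_frames, feat_dim) per-frame features of the target video
        returns:     (num_frames,)          per-frame highlight scores in [-1, 1]
        """
        clips = clip_feats.unsqueeze(0)                           # (1, num_clips, feat_dim)
        user_repr, _ = self.attn(self.user_query, clips, clips)   # (1, 1, feat_dim)
        user_repr = user_repr.squeeze(0)                          # (1, feat_dim)
        # Cosine similarity between the user representation and every frame.
        return F.cosine_similarity(frame_feats, user_repr, dim=-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    scorer = UserHighlightScorer()
    clip_feats = torch.randn(5, 512)     # 5 preferred highlight clips
    frame_feats = torch.randn(300, 512)  # 300 frames of a target video
    scores = scorer(clip_feats, frame_feats)
    print(scores.shape)                  # torch.Size([300])
```

In the paper, the clip and frame features combine pre-trained object and human-activity representations; here a single generic feature vector per clip and per frame stands in for both.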