Fine-Grained Video Captioning for Sports Narrative
Author: | Bingbing Ni, Shuo Cheng, Xiaokang Yang, Minsi Wang, Huanyu Yu, Jian Zhang |
---|---|
Year: | 2018 |
Subject: |
Closed captioning, Computer science, Feature extraction, Image processing and computer vision, Software engineering, Motion (physics), Task analysis, Metric (mathematics), Narrative, Artificial intelligence, Natural language processing |
Source: | CVPR |
DOI: | 10.1109/cvpr.2018.00629 |
Description: | Despite the recent emergence of video captioning methods, generating fine-grained video descriptions (i.e., long, detailed commentary on the individual movements of multiple subjects and their frequent interactions) remains far from solved, even though it has important applications such as automatic sports narration. To this end, this work makes the following contributions. First, to facilitate research on fine-grained video captioning, we collected a new dataset, the Fine-grained Sports Narrative dataset (FSN), containing 2K sports videos with ground-truth narratives from YouTube.com. Second, we develop a new evaluation metric, Fine-grained Captioning Evaluation (FCE), tailored to this task. An extension of the widely used METEOR, it measures not only linguistic quality but also whether the action details and their temporal order are described correctly. Third, we propose a new framework for the fine-grained sports narrative task. The network has three branches: 1) a spatio-temporal entity localization and role-discovering sub-network; 2) a fine-grained action modeling sub-network for local skeleton motion description; and 3) a group relationship modeling sub-network that models interactions between players. We then fuse the features and decode them into long narratives with a hierarchically recurrent structure. Extensive experiments on the FSN dataset demonstrate the validity of the proposed framework for fine-grained video captioning. |
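The FCE metric described above augments a METEOR-style linguistic score with a check on whether actions are narrated in the correct temporal order. The following is a minimal toy sketch of that idea only, not the paper's actual FCE formula: the LCS-based order score, the `alpha` weight, and the function names are assumptions for illustration.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence (classic DP)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def temporal_order_score(ref_actions, cand_actions):
    """Fraction of reference actions reproduced in the candidate
    in the same relative order (hypothetical order-agreement term)."""
    if not ref_actions:
        return 0.0
    return lcs_length(ref_actions, cand_actions) / len(ref_actions)

def toy_fce(linguistic_score, ref_actions, cand_actions, alpha=0.5):
    """Blend a base linguistic score (e.g. METEOR) with the order
    score; alpha is a made-up weight, not from the paper."""
    order = temporal_order_score(ref_actions, cand_actions)
    return (1 - alpha) * linguistic_score + alpha * order

# Example: candidate swaps the order of "passes" and "shoots".
ref = ["dribbles", "passes", "shoots"]
cand = ["dribbles", "shoots", "passes"]
print(toy_fce(0.6, ref, cand))
```

A caption with all the right action words but in the wrong order is penalized by the order term, which is the intuition the abstract attributes to FCE.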
Database: | OpenAIRE |
External link: |