Informative Visual Storytelling with Cross-modal Rules
Author: | Yueting Zhuang, Siliang Tang, Fei Wu, Jiacheng Li, Haizhou Shi |
Year of publication: | 2019 |
Subject: |
FOS: Computer and information sciences
Computer Science - Computation and Language (cs.CL); Computer Science - Multimedia (cs.MM); H.5.1; Visual storytelling; Natural language processing; Artificial intelligence; Storytelling |
Source: | ACM Multimedia |
DOI: | 10.48550/arxiv.1907.03240 |
Description: | Existing methods in visual storytelling often generate overly general descriptions, leaving much of the meaningful content in the images unnoticed. This failure to produce informative stories can be attributed to the model's inability to capture enough meaningful concepts, including entities, attributes, actions, and events, which are in some cases crucial to grounded storytelling. To address this problem, we propose a method that mines cross-modal rules to help the model infer these informative concepts given the visual input. We first build multimodal transactions by concatenating CNN activations and word indices. We then apply an association rule mining algorithm to extract cross-modal rules, which are used for concept inference. With the help of these rules, the generated stories are more grounded and informative. Moreover, the proposed method offers interpretability, expandability, and transferability, indicating potential for wider application. Finally, we leverage the inferred concepts in our encoder-decoder framework with an attention mechanism. Experiments on the VIsual StoryTelling (VIST) dataset demonstrate the effectiveness of our approach in terms of both automatic metrics and human evaluation. Additional experiments show that the mined cross-modal rules, used as additional knowledge, help the model perform better when trained on a small dataset. (A minimal code sketch of the rule-mining pipeline appears after this record.) Comment: 9 pages, to appear in ACM Multimedia 2019 |
Database: | OpenAIRE |
External link: |
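The description above outlines a concrete pipeline: build multimodal transactions from CNN activations and word indices, mine association rules over them, and fire the mined rules on a new image to infer word-level concepts. The following is a minimal sketch of that pipeline using off-the-shelf Apriori mining from the mlxtend library; the top-k binarisation of activations, the item naming (`v*`/`w*` prefixes), and all support/confidence thresholds are illustrative assumptions, not the paper's actual settings.

```python
# Minimal sketch: cross-modal association rule mining with mlxtend's Apriori.
# Thresholds, top-k binarisation, and item naming are illustrative assumptions.
import numpy as np
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules


def build_transactions(cnn_activations, token_ids, top_k=20):
    """Build multimodal transactions: each transaction joins the indices of
    the most strongly activated CNN units ("visual items") with the word
    indices of the paired sentence ("textual items")."""
    transactions = []
    for act, tokens in zip(cnn_activations, token_ids):
        visual_items = [f"v{i}" for i in np.argsort(act)[-top_k:]]
        text_items = [f"w{t}" for t in set(tokens)]
        transactions.append(visual_items + text_items)
    return transactions


def mine_cross_modal_rules(transactions, min_support=0.01, min_conf=0.5):
    """Mine rules whose antecedent is purely visual and whose consequent is
    purely textual, i.e. visual evidence -> word-level concept."""
    te = TransactionEncoder()
    onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                          columns=te.columns_)
    itemsets = apriori(onehot, min_support=min_support, use_colnames=True)
    rules = association_rules(itemsets, metric="confidence",
                              min_threshold=min_conf)
    is_cross_modal = (
        rules["antecedents"].apply(lambda s: all(i.startswith("v") for i in s))
        & rules["consequents"].apply(lambda s: all(i.startswith("w") for i in s))
    )
    return rules[is_cross_modal]


def infer_concepts(rules, visual_items):
    """Concept inference: fire every rule whose visual antecedent is present
    in the new image and collect the predicted word concepts."""
    visual_items = set(visual_items)
    concepts = set()
    for _, rule in rules.iterrows():
        if set(rule["antecedents"]) <= visual_items:
            concepts |= set(rule["consequents"])
    return concepts
```

Filtering to rules with purely visual antecedents and purely textual consequents is what makes them cross-modal: at generation time only visual evidence is available, and the fired consequents supply candidate concepts for the attention-based encoder-decoder described in the abstract.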