Constrained LSTM and Residual Attention for Image Captioning
Author: | Xinlong Lu, Haifeng Hu, Songlong Xing, Liang Yang |
---|---|
Year of publication: | 2020 |
Subject: |
Closed captioning; Computer Networks and Communications; Computer science; Electrical and electronic engineering; Object detection; Focus (linguistics); Hardware and Architecture; Visual objects; Artificial intelligence and image processing; Relevance (information retrieval); Language model; Artificial intelligence; Representation (mathematics); Natural language processing; Sentence |
Source: | ACM Transactions on Multimedia Computing, Communications, and Applications. 16:1-18 |
ISSN: | 1551-6865, 1551-6857 |
DOI: | 10.1145/3386725 |
Description: | Visual structure and syntactic structure are essential in images and texts, respectively. Visual structure depicts both the entities in an image and their interactions, whereas syntactic structure in text reflects the part-of-speech constraints between adjacent words. Most existing methods either use a global visual representation to guide the language model or generate captions without considering the relationships between different entities or adjacent words; their language models therefore lack grounding in both visual and syntactic structure. To solve this problem, we propose a model that aligns the language model with a specific visual structure and also constrains it with a part-of-speech template. In addition, most methods exploit the latent relationship between words in a sentence and pre-extracted visual regions in an image, yet ignore the effect of unextracted regions on the predicted words. We develop a residual attention mechanism that simultaneously attends to the pre-extracted visual objects and the unextracted regions of an image. Residual attention can capture the precise regions of an image corresponding to the predicted words by accounting for both the visual objects and the unextracted regions (a hedged sketch of this step follows the record below). The effectiveness of the entire framework and of each proposed module is verified on two benchmark datasets, MSCOCO and Flickr30k. Our framework is on par with or better than state-of-the-art methods and achieves superior performance on the COCO captioning leaderboard. |
Database: | OpenAIRE |
External link: |
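
The description above says the residual attention mechanism attends to pre-extracted visual objects and to unextracted image regions at the same time. Below is a minimal PyTorch sketch of what such a step could look like, assuming detector region features stand in for the "pre-extracted visual objects", CNN grid features stand in for the "unextracted regions", additive (Bahdanau-style) attention is used for scoring, and the two contexts are fused by a residual sum. The class name `ResidualAttentionSketch` and all parameter names are illustrative, not the authors' code, and the exact fusion in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualAttentionSketch(nn.Module):
    """Illustrative attention over object features plus grid features."""

    def __init__(self, hidden_dim: int, feat_dim: int, attn_dim: int):
        super().__init__()
        # Projections for additive attention scores over each feature set.
        self.w_h = nn.Linear(hidden_dim, attn_dim)
        self.w_obj = nn.Linear(feat_dim, attn_dim)
        self.w_grid = nn.Linear(feat_dim, attn_dim)
        self.v_obj = nn.Linear(attn_dim, 1)
        self.v_grid = nn.Linear(attn_dim, 1)

    def forward(self, h, obj_feats, grid_feats):
        # h:          (B, hidden_dim)  LSTM hidden state at the current step
        # obj_feats:  (B, N_obj, feat_dim)  pre-extracted detector regions
        # grid_feats: (B, N_grid, feat_dim) CNN grid features standing in
        #             for the unextracted regions (an assumption here)
        q = self.w_h(h).unsqueeze(1)  # (B, 1, attn_dim), broadcast over regions
        score_obj = self.v_obj(torch.tanh(q + self.w_obj(obj_feats))).squeeze(-1)
        score_grid = self.v_grid(torch.tanh(q + self.w_grid(grid_feats))).squeeze(-1)
        ctx_obj = (F.softmax(score_obj, dim=-1).unsqueeze(-1) * obj_feats).sum(dim=1)
        ctx_grid = (F.softmax(score_grid, dim=-1).unsqueeze(-1) * grid_feats).sum(dim=1)
        # Residual combination: the grid context supplements whatever the
        # object attention misses. The paper may weight or gate this fusion.
        return ctx_obj + ctx_grid

# Usage with dummy tensors (36 detected objects, a 7x7 feature grid):
if __name__ == "__main__":
    attn = ResidualAttentionSketch(hidden_dim=512, feat_dim=2048, attn_dim=512)
    h = torch.randn(4, 512)
    obj = torch.randn(4, 36, 2048)
    grid = torch.randn(4, 49, 2048)
    print(attn(h, obj, grid).shape)  # torch.Size([4, 2048])
```

In this reading, the object branch alone would miss words grounded in background or stuff regions that the detector never proposes; the grid branch supplies a correction term for exactly those regions, which is one plausible interpretation of the "residual" in the mechanism's name.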