Image Captioning using Deep Stacked LSTMs, Contextual Word Embeddings and Data Augmentation
Author: | Katiyar, Sulabh; Borgohain, Samir Kumar |
---|---|
Year of publication: | 2021 |
Subject: | |
Document type: | Working Paper |
Description: | Image Captioning, or the automatic generation of descriptions for images, is one of the core problems in Computer Vision and has seen considerable progress with Deep Learning techniques. We propose to use an Inception-ResNet Convolutional Neural Network as the encoder to extract image features, Hierarchical Context-based Word Embeddings for word representations, and a Deep Stacked Long Short Term Memory network as the decoder, in addition to Image Data Augmentation to avoid over-fitting. For data augmentation, we use Horizontal and Vertical Flipping in addition to Perspective Transformations on the images. We evaluate the proposed methods with two image captioning frameworks: Encoder-Decoder and Soft Attention. Evaluation on widely used metrics shows that our approach leads to considerable improvement in model performance. Comment: Accepted for publication in Springer Book Series: Advances in Intelligent Systems and Computing - ISSN 2194-5357. Upon publication, this article will point to the published one |
Database: | arXiv |
External link: |
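The flip-based augmentations mentioned in the description (horizontal and vertical flipping) can be sketched as below. This is a minimal illustration on a raw pixel grid, not the authors' implementation; in practice the flips and the perspective transformations would be applied with library routines (e.g. torchvision's `RandomHorizontalFlip`, `RandomVerticalFlip`, and `RandomPerspective`), and the function names here are illustrative assumptions.

```python
# Illustrative sketch of flip augmentations on an image represented as a
# 2D list of pixel values (rows of pixels). Not the paper's code.

def horizontal_flip(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def vertical_flip(img):
    """Reverse the order of rows (top-to-bottom mirror)."""
    return img[::-1]

def augment(img):
    """Return the original image plus its two flipped variants,
    tripling the effective training data for this image."""
    return [img, horizontal_flip(img), vertical_flip(img)]

img = [[1, 2],
       [3, 4]]
variants = augment(img)
# variants[1] == [[2, 1], [4, 3]]  (horizontal flip)
# variants[2] == [[3, 4], [1, 2]]  (vertical flip)
```

Note that for captioning, flipped images generally keep the same caption, which is why such label-preserving transforms are a cheap way to reduce over-fitting.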