Developing a deep neural network-based encoder-decoder framework in automatic image captioning systems

Authors: Md. Mijanur Rahman, Ashik Uzzaman, Sadia Islam Sami, Fatema Khatun
Year of publication: 2022
DOI: 10.21203/rs.3.rs-2046359/v1
Description: This study concerns the development of a deep neural network-based framework, comprising a convolutional neural network (CNN) encoder and a Long Short-Term Memory (LSTM) decoder, for an automatic image captioning application. The proposed model perceives salient points in an image and their relationships within the scene. First, the CNN encoder, which excels at retaining spatial information and recognizing objects in images, extracts features and produces a vocabulary of keywords describing the photos. Second, the LSTM decoder predicts words and composes meaningful sentences from those keywords. In the proposed system, the VGG-19 model serves as the image feature extractor and sequence processor, and the LSTM model then produces a fixed-length output vector as the final prediction. Images from several open-source datasets, including Flickr 8k, Flickr 30k, and MS COCO, were explored and used for training and testing the proposed model. The experiments were implemented in Python using Keras with a TensorFlow backend. They demonstrated automatic image captioning and evaluated the performance of the proposed model using the BLEU (BiLingual Evaluation Understudy) metric.
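The encoder-decoder design described above can be sketched in Keras (the paper's stated framework). This is a minimal illustrative sketch, not the authors' exact model: the dimensions (`vocab_size`, `max_len`, `feature_dim`) and the merge-style decoder layout are assumptions, and the VGG-19 features are assumed to be extracted offline.

```python
# Sketch of a CNN-encoder / LSTM-decoder captioning model in Keras.
# All sizes below are illustrative assumptions, not values from the paper.
from tensorflow.keras import layers, Model

vocab_size = 5000    # assumed caption vocabulary size
max_len = 34         # assumed maximum caption length (in tokens)
feature_dim = 4096   # size of a VGG-19 fully-connected-layer feature vector

# Image branch: precomputed VGG-19 features -> dense projection.
img_in = layers.Input(shape=(feature_dim,), name="image_features")
img_vec = layers.Dense(256, activation="relu")(layers.Dropout(0.5)(img_in))

# Text branch: partial caption -> embedding -> LSTM state.
txt_in = layers.Input(shape=(max_len,), name="caption_prefix")
emb = layers.Embedding(vocab_size, 256, mask_zero=True)(txt_in)
txt_vec = layers.LSTM(256)(layers.Dropout(0.5)(emb))

# Merge both branches and predict the next word of the caption.
merged = layers.add([img_vec, txt_vec])
hidden = layers.Dense(256, activation="relu")(merged)
out = layers.Dense(vocab_size, activation="softmax")(hidden)

model = Model(inputs=[img_in, txt_in], outputs=out)
model.compile(loss="categorical_crossentropy", optimizer="adam")
```

At inference time such a model is typically run word by word: the caption is seeded with a start token, the predicted word is appended to the prefix, and the loop repeats until an end token or `max_len` is reached.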
Database: OpenAIRE
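The BLEU metric mentioned in the description scores a generated caption by its n-gram overlap with a reference caption. As a rough illustration, a pure-Python unigram (BLEU-1) variant with a brevity penalty might look like the following; production evaluations would instead use an established implementation (e.g. NLTK's `corpus_bleu`) with higher-order n-grams and multiple references.

```python
# Illustrative single-reference, unigram-only BLEU sketch.
from collections import Counter
import math

def bleu1(candidate: str, reference: str) -> float:
    """Clipped unigram precision times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Clip each candidate word's count by its count in the reference.
    overlap = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    precision = overlap / len(cand)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

score = bleu1("a dog runs on the grass", "a dog is running on the grass")
# 5 of 6 candidate unigrams match; the brevity penalty is exp(-1/6).
```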