Unified Vision-Language Pre-Training for Image Captioning and VQA
Author: | Hamid Palangi, Jianfeng Gao, Jason J. Corso, Lei Zhang, Houdong Hu, Luowei Zhou |
---|---|
Year of publication: | 2020 |
Subject: | FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); closed captioning; question answering; unsupervised learning; artificial intelligence & image processing; natural language processing; encoder; decoding methods; transformer (machine learning model) |
Source: | AAAI |
ISSN: | 2374-3468, 2159-5399 |
Description: | This paper presents a unified Vision-Language Pre-training (VLP) model. The model is unified in that (1) it can be fine-tuned for either vision-language generation (e.g., image captioning) or understanding (e.g., visual question answering) tasks, and (2) it uses a shared multi-layer transformer network for both encoding and decoding, which differs from many existing methods where the encoder and decoder are implemented with separate models. The unified VLP model is pre-trained on a large number of image-text pairs using the unsupervised learning objectives of two tasks: bidirectional and sequence-to-sequence (seq2seq) masked vision-language prediction. The two tasks differ solely in the context the prediction conditions on, which is controlled by applying task-specific self-attention masks to the shared transformer network (see the sketch following this record). To the best of our knowledge, VLP is the first reported model that achieves state-of-the-art results on both vision-language generation and understanding tasks, as disparate as image captioning and visual question answering, across three challenging benchmark datasets: COCO Captions, Flickr30k Captions, and VQA 2.0. The code and the pre-trained models are available at https://github.com/LuoweiZhou/VLP. Comment: AAAI 2020 camera-ready version. |
Database: | OpenAIRE |
External link: |
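
The description notes that the bidirectional and seq2seq pre-training objectives differ only in the self-attention mask applied to the shared transformer. Below is a minimal PyTorch sketch of how such masks can be constructed; the helper name `make_attention_mask` is hypothetical, and special tokens ([CLS], [SEP]) are omitted, so this illustrates the idea rather than the authors' implementation (available at https://github.com/LuoweiZhou/VLP).

```python
import torch

def make_attention_mask(num_img: int, num_txt: int, seq2seq: bool) -> torch.Tensor:
    """Hypothetical helper: build the self-attention mask that switches the
    shared transformer between the two VLP pre-training objectives.

    Rows index query positions, columns index key positions; 1 = may attend.
    The input sequence is assumed to be [image regions | caption tokens].
    """
    n = num_img + num_txt
    mask = torch.ones(n, n)  # bidirectional case: every position sees every other
    if seq2seq:
        # Image regions attend only to each other (never to the caption) ...
        mask[:num_img, num_img:] = 0
        # ... and each caption token attends to all image regions plus the
        # caption tokens at or before its own position (causal, left-to-right).
        mask[num_img:, num_img:] = torch.tril(torch.ones(num_txt, num_txt))
    return mask

# Example with 3 image regions and 4 caption tokens:
print(make_attention_mask(3, 4, seq2seq=True))   # seq2seq objective
print(make_attention_mask(3, 4, seq2seq=False))  # bidirectional objective
```

Because only the mask changes, a single set of weights serves both objectives: at fine-tuning time the seq2seq mask turns the network into a left-to-right decoder for captioning, while the bidirectional mask makes it an encoder for understanding tasks such as VQA.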