Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning
Author: | Cho-Jui Hsieh, Pin-Yu Chen, Huan Zhang, Hongge Chen, Jinfeng Yi |
Year of publication: | 2017 |
Subject: |
Closed captioning, Computer and information sciences, Machine vision, Computer science, Feature extraction, Computer Vision and Pattern Recognition (cs.CV), Convolutional neural network, Image (mathematics), Visual language, Recurrent neural network, Robustness (computer science), Artificial intelligence |
Source: | ACL (1) |
DOI: | 10.48550/arxiv.1712.02051 |
Description: | Visual language grounding is widely studied in modern neural image captioning systems, which typically adopt an encoder-decoder framework consisting of two principal components: a convolutional neural network (CNN) for image feature extraction and a recurrent neural network (RNN) for caption generation. To study the robustness of language grounding to adversarial perturbations in machine vision and perception, we propose Show-and-Fool, a novel algorithm for crafting adversarial examples in neural image captioning. The proposed algorithm provides two evaluation approaches, which check whether a neural image captioning system can be misled into outputting some randomly chosen captions or keywords. Our extensive experiments show that our algorithm can successfully craft visually similar adversarial examples with randomly targeted captions or keywords, and that these adversarial examples can be made highly transferable to other image captioning systems. Consequently, our approach yields new robustness implications for neural image captioning and novel insights into visual language grounding. Comment: Accepted by the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018). Hongge Chen and Huan Zhang contributed equally to this work. |
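The targeted attack described in the abstract can be sketched as an optimization over an image perturbation: minimize the perturbation's size while a Carlini-Wagner-style hinge loss pushes the logits of the target word above every alternative. The sketch below is a minimal illustration under strong simplifying assumptions — a single linear logit layer stands in for the full CNN+RNN captioner, and all names, constants, and the toy model itself are hypothetical, not the paper's implementation.

```python
import numpy as np

# Toy stand-in for the captioner: one linear logit layer over a small
# "vocabulary" (hypothetical; the paper attacks a full CNN+RNN pipeline).
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 8))   # 5 candidate "words", 8 image features
x = rng.standard_normal(8)        # clean "image"
target = 3                        # randomly chosen target word

def show_and_fool_sketch(x, target, c=5.0, lr=0.01, steps=2000, kappa=0.5):
    """CW-style targeted attack: trade off perturbation size against a
    hinge loss that pushes the target logit above every other logit."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = W @ (x + delta)
        others = z.copy()
        others[target] = -np.inf
        j = int(np.argmax(others))        # current runner-up word
        if z[j] - z[target] > -kappa:     # target margin not yet reached
            grad = c * (W[j] - W[target]) + 2 * delta
        else:                             # margin met: only shrink delta
            grad = 2 * delta
        delta = delta - lr * grad
    return delta

delta = show_and_fool_sketch(x, target)
adv_word = int(np.argmax(W @ (x + delta)))  # word the attacked model emits
```

In the real system the same hinge is applied per time step of the RNN's caption, and gradients flow back through the CNN to the pixels; this sketch only preserves the structure of the loss, not the model.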
Database: | OpenAIRE |
External link: |