Defending Multimodal Fusion Models against Single-Source Adversaries
Author: Wan-Yi Lin, Zico Kolter, Filipe Condessa, Karren Yang, Manash Pratim Barman
Language: English
Year of publication: 2022
Subject: FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV); Computer Science - Machine Learning (cs.LG); Computer Science - Cryptography and Security (cs.CR); Multimodal fusion; Modality (human–computer interaction); Modalities; Artificial neural network; Computer science; Sentiment analysis; Object detection; Robustness (computer science); Artificial intelligence; Vulnerability (computing); 68T01; 68T45
Source: CVPR
Description: Beyond achieving high performance across many vision tasks, multimodal models are expected to be robust to single-source faults due to the availability of redundant information between modalities. In this paper, we investigate the robustness of multimodal neural networks against worst-case (i.e., adversarial) perturbations on a single modality. We first show that standard multimodal fusion models are vulnerable to single-source adversaries: an attack on any single modality can overcome the correct information from multiple unperturbed modalities and cause the model to fail. This surprising vulnerability holds across diverse multimodal tasks and necessitates a solution. Motivated by this finding, we propose an adversarially robust fusion strategy that trains the model to compare information coming from all the input sources, detect inconsistencies in the perturbed modality compared to the other modalities, and only allow information from the unperturbed modalities to pass through. Our approach significantly improves on state-of-the-art methods in single-source robustness, achieving gains of 7.8-25.2% on action recognition, 19.7-48.2% on object detection, and 1.6-6.7% on sentiment analysis, without degrading performance on unperturbed (i.e., clean) data. CVPR 2021. (An illustrative sketch of this fusion strategy follows the record below.)
Database: OpenAIRE
External link:
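The description outlines a robust fusion strategy that scores how consistent each modality's features are with the others and lets only the agreeing, unperturbed modalities pass through. Below is a minimal PyTorch sketch of that general idea, assuming feature-level fusion of per-modality embeddings; the module, its names, and the consistency-scoring network are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch of inconsistency-gated multimodal fusion (hypothetical names,
# not the paper's released code). Each modality's features are compared against
# the consensus of the remaining modalities; disagreeing modalities get low weight.
import torch
import torch.nn as nn


class GatedRobustFusion(nn.Module):
    """Fuse per-modality features after down-weighting modalities whose
    features disagree with the consensus of the other modalities."""

    def __init__(self, num_modalities: int, feat_dim: int):
        super().__init__()
        self.num_modalities = num_modalities
        # Small network scoring how consistent one modality's features are
        # with the mean of the remaining modalities' features.
        self.consistency_scorer = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_modalities, feat_dim)
        b, m, d = feats.shape
        scores = []
        for i in range(m):
            others = torch.cat([feats[:, :i], feats[:, i + 1:]], dim=1)  # (b, m-1, d)
            consensus = others.mean(dim=1)                               # (b, d)
            pair = torch.cat([feats[:, i], consensus], dim=-1)           # (b, 2d)
            scores.append(self.consistency_scorer(pair))                 # (b, 1)
        gates = torch.softmax(torch.cat(scores, dim=1), dim=1)           # (b, m)
        # Weighted sum passes mostly-consistent modalities through and
        # suppresses the (potentially attacked) outlier modality.
        fused = (gates.unsqueeze(-1) * feats).sum(dim=1)                 # (b, d)
        return fused


if __name__ == "__main__":
    fusion = GatedRobustFusion(num_modalities=3, feat_dim=128)
    x = torch.randn(4, 3, 128)   # e.g. RGB, depth, and audio embeddings
    print(fusion(x).shape)       # torch.Size([4, 128])
```

The sketch only shows the forward pass; in the setting the abstract describes, such a gating mechanism would be trained with single-source adversarial perturbations so that it learns to assign low weight to the attacked modality while leaving clean-data performance intact.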