Learning Adaptive Weight Masking for Adversarial Examples
Authors: | Thomas Trappenberg, Michael Traynor, Sageev Oore, Yoshimasa Kubo |
Year: | 2019 |
Subject: | Masking, pointwise multiplication, sigmoid function, convolutional neural network, pattern recognition, computer science, artificial intelligence |
Source: | IJCNN |
DOI: | 10.1109/ijcnn.2019.8852298 |
Description: | Adding small, well-crafted perturbations to the pixel values of input images produces adversarial examples, so called because these perturbed images can drastically degrade the accuracy of machine learning classifiers. Defenses against such attacks are actively studied, often with varying results. In this study, we introduce a model called the Stochastic-Gated Partially Binarized Network (SGBN), which incorporates binarization and input-dependent stochasticity. In particular, a gate module learns the probability that individual weights in corresponding convolutional filters should be masked (turned on or off). The gate module itself is a shallow convolutional neural network; its sigmoid outputs are stochastically binarized and pointwise multiplied with the corresponding filters in the convolutional layer of the main network. We test and compare our model against several related approaches, and, to gain insight into its behavior, we visualize activations of some of the gating network outputs together with their corresponding filters. |
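The gating mechanism described in the abstract (input-dependent sigmoid probabilities, stochastically binarized into a 0/1 mask that is pointwise multiplied with a convolutional filter) can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the authors' implementation: the shallow gate CNN is replaced here by a single linear map producing one probability per filter weight, and all shapes and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stochastic_binarize(p, rng):
    """Sample a 0/1 mask with per-entry Bernoulli probability p."""
    return (rng.random(p.shape) < p).astype(p.dtype)

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D cross-correlation, sufficient for this sketch."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Illustrative shapes: one 8x8 input image, one 3x3 main-network filter.
x = rng.standard_normal((8, 8))
w = rng.standard_normal((3, 3))

# Gate module: in the paper this is a shallow CNN; a single linear map
# from the input to one logit per filter weight stands in for it here.
gate_W = 0.1 * rng.standard_normal((w.size, x.size))
p = sigmoid(gate_W @ x.ravel()).reshape(w.shape)  # input-dependent probabilities

# Stochastically binarize the gate outputs, then pointwise multiply
# the resulting 0/1 mask with the filter before the usual convolution.
mask = stochastic_binarize(p, rng)
y = conv2d_valid(x, mask * w)
print(y.shape)
```

At training time a model like this would typically need a gradient estimator (e.g. straight-through) for the non-differentiable binarization step; the abstract does not specify which one the authors use, so none is shown here.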
Database: | OpenAIRE |
External link: |