Abstract: |
Constructing imperceptible adversarial examples with few perturbations remains a difficult problem in adversarial attacks. Most current approaches use standard gradient-based optimization to build adversarial examples by applying global perturbations to benign samples and then attacking the target (e.g., a face recognition system). However, when the perturbation magnitude is limited, the performance of these approaches degrades substantially. On the other hand, the content of critical regions in an image strongly influences the final prediction; if these regions can be located and limited perturbations injected into them, an effective adversarial example can still be constructed. Motivated by this observation, this article proposes a dual attention adversarial network (DAAN) to generate adversarial examples with limited perturbations. DAAN first searches for effective regions in the input image using a spatial attention network and a channel attention network, producing spatial and channel weights. These weights then guide an encoder and a decoder to generate an effective perturbation, which is added to the input to form the adversarial example. Finally, a discriminator judges whether the generated adversarial examples are real or fake, and the attacked model is used to determine whether the generated samples achieve the attack targets. Extensive experiments on multiple datasets show that DAAN not only delivers the best attack performance among all compared algorithms under few perturbations, but can also significantly improve the robustness of the attacked models.
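
The sketch below is a minimal, hypothetical illustration of the kind of attention-guided perturbation generator the abstract describes, not the authors' implementation: the module names (ChannelAttention, SpatialAttention, PerturbationGenerator), the CBAM-style attention design, layer sizes, and the perturbation bound eps are all assumptions added for clarity.

# Minimal PyTorch sketch of a dual-attention perturbation generator of the kind
# described above. All names, layer sizes, and the attention design are
# illustrative assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Produces per-channel weights from globally pooled features."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.mlp(x)  # shape: (B, C, 1, 1)

class SpatialAttention(nn.Module):
    """Produces a per-pixel weight map from channel-wise statistics."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return self.conv(torch.cat([avg, mx], dim=1))  # shape: (B, 1, H, W)

class PerturbationGenerator(nn.Module):
    """Encoder-decoder that emits a bounded perturbation gated by dual attention."""
    def __init__(self, in_ch=3, feat=32, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.channel_att = ChannelAttention(feat * 2)
        self.spatial_att = SpatialAttention()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, in_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        h = self.encoder(x)
        h = h * self.channel_att(h) * self.spatial_att(h)  # dual-attention gating
        delta = self.decoder(h) * self.eps                  # keep perturbation small
        return torch.clamp(x + delta, 0.0, 1.0)             # adversarial example

# Usage: during training, x_adv would be fed both to a discriminator
# (real/fake loss) and to the attacked model (attack-success loss).
gen = PerturbationGenerator()
x = torch.rand(4, 3, 32, 32)
x_adv = gen(x)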