Author:
Guanxiong Liu, Issa Khalil, Abdallah Khreishah, Abdulelah Algosaibi, Adel Aldalbahi, Mohammed Alnaeem, Abdulaziz Alhumam, Muhammad Anan
Language:
English
Year:
2020
Subject:

Source:
IEEE Access, Vol. 8, pp. 197086-197096 (2020)
Document type:
article
ISSN:
2169-3536
DOI:
10.1109/ACCESS.2020.3029270
Description:
Recent research has shown that neural network (NN) classifiers are vulnerable to adversarial examples: inputs containing special perturbations that are imperceptible to the human eye yet can mislead NN classifiers. In this paper, we propose a practical black-box adversarial example generator, dubbed ManiGen. ManiGen requires no knowledge of the inner state of the target classifier; it generates adversarial examples by searching along the manifold, a concise representation of the input data. Through an extensive set of experiments on different datasets, we show that (1) adversarial examples generated by ManiGen can mislead standalone classifiers as successfully as the state-of-the-art white-box generator, Carlini, and (2) adversarial examples generated by ManiGen can more effectively attack classifiers equipped with state-of-the-art defenses.
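The black-box, manifold-search idea summarized in the abstract can be sketched as follows. Everything below is an illustrative assumption, not the paper's actual method: a random linear "autoencoder" stands in for the learned manifold, the black-box classifier is a hidden linear model queried only through its output labels, and plain random search replaces ManiGen's search procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 8, 2                       # input dimension, latent (manifold) dimension

# Toy linear autoencoder standing in for the learned manifold (assumption).
W = rng.normal(size=(K, D))       # encoder weights; decoder uses W.T

def encode(x):
    """Project an input onto the manifold (latent) coordinates."""
    return W @ x

def decode(z):
    """Map manifold coordinates back to input space."""
    return W.T @ z

# Hidden weights of the black-box classifier: the attacker never sees these,
# only the predicted labels returned by black_box_label.
w_clf = rng.normal(size=D)

def black_box_label(x):
    """Query the target classifier; only the label is observable."""
    return int(w_clf @ x > 0)

def manifold_search_attack(x, n_queries=600, step=0.3, growth=1.01):
    """Random search in latent space for a label-flipping example."""
    y0 = black_box_label(x)
    z = encode(x)
    for t in range(n_queries):
        # Perturb along the manifold with a slowly growing step size.
        z_adv = z + step * (growth ** t) * rng.normal(size=K)
        x_adv = decode(z_adv)
        if black_box_label(x_adv) != y0:
            return x_adv          # decoded point that flips the label
    return None

x = rng.normal(size=D)
x_adv = manifold_search_attack(x)
```

A returned `x_adv` lies on the toy manifold (the row space of `W`) yet receives a different label than `x`, mirroring the setting in which the attacker needs no gradients or inner state of the target classifier.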
Database:
Directory of Open Access Journals
External link: