Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
Author: Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, Anil K. Jain
Year of publication: 2019
Subject: FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Cryptography and Security (cs.CR); Statistics - Machine Learning (stat.ML); adversarial system; robustness (computer science); deep neural networks; deep learning; artificial intelligence; computer security; computer science applications; applied mathematics; control and systems engineering; modeling and simulation
DOI: 10.48550/arxiv.1909.08072
Description: Deep neural networks (DNNs) have achieved unprecedented success in numerous machine learning tasks across various domains. However, the existence of adversarial examples has raised concerns about applying deep learning to safety-critical applications. As a result, there has been growing interest in studying attack and defense mechanisms for DNN models on different data types, such as images, graphs and text. It is therefore necessary to provide a systematic and comprehensive overview of the main attack threats and the success of the corresponding countermeasures. In this survey, we review state-of-the-art algorithms for generating adversarial examples and the countermeasures against them for three popular data types: images, graphs and text. (A minimal sketch of one such attack appears below this record.) Comment: survey, adversarial attacks, defenses
Database: OpenAIRE
External link:
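
As an illustration of the image-domain attacks this review surveys, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the classic algorithms for generating adversarial examples. The PyTorch framing, the `fgsm_attack` helper name, and the `epsilon` budget of 0.03 are illustrative assumptions for a minimal sketch, not code taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Minimal FGSM sketch (illustrative; not from the surveyed paper).

    model   -- a differentiable classifier returning logits
    x       -- input image batch with pixel values in [0, 1]
    y       -- ground-truth labels for x
    epsilon -- L-infinity perturbation budget (hypothetical default)
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss the attacker wants to increase
    loss.backward()
    # Step each pixel by epsilon in the direction of the loss gradient's sign.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Clip back to the valid pixel range so the result is still an image.
    return x_adv.clamp(0.0, 1.0).detach()
```

Defenses covered by the survey, such as adversarial training, typically reuse this kind of attack inside the training loop to generate perturbed examples on the fly.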