A Survey on Attacks and Their Countermeasures in Deep Learning: Applications in Deep Neural Networks, Federated, Transfer, and Deep Reinforcement Learning

Authors: Haider Ali, Dian Chen, Matthew Harrington, Nathaniel Salazar, Mohannad Al Ameedi, Ahmad Faraz Khan, Ali R. Butt, Jin-Hee Cho
Language: English
Year of publication: 2023
Subject:
Source: IEEE Access, Vol. 11, pp. 120095-120130 (2023)
Document type: article
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3326410
Description: Deep Learning (DL) techniques are being used in critical applications such as self-driving cars. DL techniques such as Deep Neural Networks (DNNs), Deep Reinforcement Learning (DRL), Federated Learning (FL), and Transfer Learning (TL) are prone to adversarial attacks, which can severely degrade their performance. Developing such attacks and their countermeasures is a prerequisite for making artificial intelligence techniques robust, secure, and deployable. Previous surveys focused on only one or two of these techniques and are outdated; they do not discuss application domains, datasets, and testbeds in detail, and the commonalities and differences among DL techniques also remain to be examined. In this paper, we comprehensively discuss the attacks and defenses for four popular DL models: DNN, DRL, FL, and TL. We also highlight the application domains, datasets, metrics, and testbeds used in these fields. One of our key contributions is a discussion of the commonalities and differences among these DL techniques. Insights, lessons learned, and future research directions are also presented in detail.
Database: Directory of Open Access Journals