Defending AI Models Against Adversarial Attacks in Smart Grids Using Deep Learning

Author: Gabriel Avelino Sampedro, Stephen Ojo, Moez Krichen, Meznah A. Alamro, Alaeddine Mihoub, Vincent Karovic
Language: English
Publication year: 2024
Source: IEEE Access, Vol 12, Pp 157408-157417 (2024)
Document type: article
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3473531
Description: Adversarial attacks manipulate data to deceive Artificial Intelligence (AI) models, causing false predictions or classifications or even disrupting the normal functioning of the smart grid. Feeding the models incorrect information produces wrong predictions, which can lead to instabilities, power imbalances, and overall operational failure. AI models are vital for controlling energy usage, identifying and even predicting equipment failures, and accurately determining power availability; however, their dependence on input data leaves them unable to withstand cyber-attacks, which compromises the stability of the grid. These attacks affect energy control, cause losses, and intrude into the critical infrastructure of the smart grid, motivating the need for enhanced smart grid protection. This study first generates a novel adversarial attack dataset on smart grids covering three attack classes, namely adversarial perturbation, backdoor injection, and Denial-of-Service (DoS), plus one benign class. It then provides a fine-tuned Deep Neural Network (DNN) model that significantly improves resistance against adversarial attacks on smart grids. Results from the various Machine Learning (ML) and DNN algorithms showed accuracy ranging from 29.10% to 73.9%, with the DNN recording the highest accuracy. This demonstrates how the approach can be utilised to predict such attacks and how the grid can be protected and secured.
Databáze: Directory of Open Access Journals
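The abstract describes a four-class classification task (three attack classes plus benign) solved with a DNN, but does not specify the architecture. The following is a minimal sketch of such a classifier's forward pass in NumPy; the layer sizes, feature dimension, class names, and random synthetic inputs are all assumptions for illustration, not the paper's actual model or data.

```python
import numpy as np

# Hypothetical class labels mirroring the abstract's four-class setup
CLASSES = ["benign", "adversarial_perturbation", "backdoor_injection", "dos"]

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    # Subtract the row max for numerical stability before exponentiating
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class TinyDNN:
    """Minimal two-layer MLP classifier sketch (assumed sizes, not the paper's model)."""

    def __init__(self, n_features=16, n_hidden=32, n_classes=len(CLASSES)):
        self.W1 = rng.normal(0.0, 0.1, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)

    def predict_proba(self, X):
        # Hidden layer with ReLU, then softmax output over the four classes
        h = relu(X @ self.W1 + self.b1)
        return softmax(h @ self.W2 + self.b2)

    def predict(self, X):
        return [CLASSES[i] for i in self.predict_proba(X).argmax(axis=-1)]

model = TinyDNN()
X = rng.normal(size=(3, 16))  # three synthetic grid-measurement feature vectors
probs = model.predict_proba(X)
labels = model.predict(X)
```

In practice the weights would be trained (e.g. by cross-entropy minimisation) on the labelled attack dataset the paper constructs; this sketch only shows the four-class output structure.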