Author:
Rajasekharan D, Rangarajan N, Patnaik S, Sinanoglu O, Chauhan YS
Language:
English
Source:
IEEE Transactions on Neural Networks and Learning Systems [IEEE Trans Neural Netw Learn Syst] 2023 Sep; Vol. 34 (9), pp. 5693-5707. Date of Electronic Publication: 2023 Sep 01.
DOI:
10.1109/TNNLS.2021.3130884
Abstract:
Deep neural networks (DNNs) form a critical infrastructure supporting various systems, spanning from the iPhone neural engine to imaging satellites and drones. The design of these neural cores is often proprietary or a military secret. Nevertheless, they remain vulnerable to model replication attacks that seek to reverse engineer the network's synaptic weights. In this article, we propose SCANet (Superparamagnetic-MTJ Crossbar Array Networks), a novel defense mechanism against such model-stealing attacks that exploits the innate stochasticity of superparamagnets. When used as the synapse in DNNs, superparamagnetic magnetic tunnel junctions (s-MTJs) are shown to be significantly more secure than prior memristor-based solutions. The thermally induced telegraphic switching in the s-MTJs is robust and uncontrollable, preventing attackers from extracting sensitive data from the network. By mixing superparamagnetic and conventional MTJs in the neural network (NN), the designer can trade off the interval between weight updates against the power consumed by the system. Furthermore, we propose a modified NN architecture that prevents replication attacks while minimizing power consumption. We investigate how the number of layers in the deep network and the number of neurons in each layer affect the sharpness of accuracy degradation when the network is under attack. We also explore the efficacy of SCANet in real-time scenarios, using a case study on object detection.
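The defense described in the abstract rests on random telegraph switching: s-MTJ synapses toggle stochastically between resistance states, so a snapshot of weights obtained by an attacker quickly goes stale. The following is a minimal illustrative sketch of that idea (not the authors' implementation): each weight flips sign with some per-step probability, modeling thermally induced telegraphic switching, and a stolen copy of the initial weights decorrelates from the live device over time. The flip probability, weight vector size, and step count are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def telegraph_weights(w, p_flip, steps):
    """Toy random-telegraph-noise model of s-MTJ synapses.

    Each weight independently flips sign with probability p_flip
    per time step, mimicking thermally driven two-state switching.
    Returns the weight vector at every step (including step 0).
    """
    w = np.asarray(w, dtype=float).copy()
    history = [w.copy()]
    for _ in range(steps):
        flips = rng.random(w.shape) < p_flip  # which synapses switch this step
        w = np.where(flips, -w, w)
        history.append(w.copy())
    return history

# A "stolen" snapshot at t=0 versus the live, stochastically switching weights:
w0 = rng.standard_normal(8)
hist = telegraph_weights(w0, p_flip=0.1, steps=50)
stale_fraction = np.mean(np.sign(hist[0]) != np.sign(hist[-1]))
```

Under this toy model, a replica built from the t=0 snapshot disagrees in sign with a growing fraction of the live weights, which is the intuition behind the accuracy degradation the paper studies for attacked networks.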
Database:
MEDLINE
External link:
|