Showing 1 - 10 of 76 for search: '"Breier, Jakub"'
Embedded devices with neural network accelerators offer great versatility for their users, reducing the need to use cloud-based services. At the same time, they introduce new security challenges in the area of hardware attacks, the most prominent being…
External link: http://arxiv.org/abs/2407.16467
Authors: Schröder, Jan; Breier, Jakub
Machine learning (ML) models are used in many safety- and security-critical applications nowadays. It is therefore important to measure the security of a system that uses ML as a component. This paper focuses on the field of ML, particularly the security…
External link: http://arxiv.org/abs/2406.12929
Fault injection attacks are a potent threat against embedded implementations of neural network models. Several attack vectors have been proposed, such as misclassification, model extraction, and trojan/backdoor planting. Most of these attacks work by…
External link: http://arxiv.org/abs/2405.13891
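Many of the attacks this entry refers to rely on flipping bits of parameters held in memory. The minimal sketch below (my own illustration, not code from the paper) shows how a single bit flip in one stored float32 weight of a toy linear classifier can change the predicted class; the model, input, and targeted bit position are all assumptions chosen for demonstration.

```python
import struct
import numpy as np

def flip_bit(value: float, bit: int) -> float:
    """Return `value` with one bit of its float32 representation flipped."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (faulted,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return faulted

# Toy 2-class linear model: logits = W @ x (weights chosen arbitrarily).
W = np.array([[0.5, -1.2], [0.3, 0.8]], dtype=np.float32)
x = np.array([1.0, 1.0], dtype=np.float32)
print("clean prediction:", np.argmax(W @ x))            # class 1

# Flip the most significant exponent bit (bit 30) of one weight,
# blowing the value up to roughly 1.7e38 and flipping the decision.
W_faulted = W.copy()
W_faulted[0, 0] = flip_bit(float(W[0, 0]), 30)
print("faulted prediction:", np.argmax(W_faulted @ x))  # class 0
```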
Model extraction attacks have been widely applied, which can normally be used to recover confidential parameters of neural networks for multiple layers. Recently, side-channel analysis of neural networks allows parameter extraction even for networks…
External link: http://arxiv.org/abs/2303.18132
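For intuition on what "recovering confidential parameters" means in the simplest setting, here is a generic textbook-style sketch (not the side-channel method of the paper above): with black-box query access to a single linear layer, the bias is the response to the zero vector, and each weight column follows from a standard-basis query.

```python
import numpy as np

rng = np.random.default_rng(0)
W_secret = rng.normal(size=(3, 4))   # hidden parameters of the victim layer
b_secret = rng.normal(size=3)

def oracle(x: np.ndarray) -> np.ndarray:
    """Black-box query access to the victim layer f(x) = W @ x + b."""
    return W_secret @ x + b_secret

# The bias is the response to the zero input; column j of W is recovered
# by querying the j-th standard basis vector and subtracting the bias.
b_rec = oracle(np.zeros(4))
W_rec = np.stack([oracle(np.eye(4)[j]) - b_rec for j in range(4)], axis=1)

assert np.allclose(W_rec, W_secret) and np.allclose(b_rec, b_secret)
```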
Neural network implementations are known to be vulnerable to physical attack vectors such as fault injection attacks. So far, these attacks have only been utilized during the inference phase with the intention to cause a misclassification. In this work…
External link: http://arxiv.org/abs/2109.11249
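As a rough illustration of why the training phase is also an attack surface, the toy experiment below (my own sketch; the fault model and setup are assumptions, not the paper's) trains a logistic-regression classifier with plain gradient descent while one weight is held stuck at zero by a persistent fault, and compares the result against a fault-free run. The faulted model is forced to ignore one input feature, so its accuracy degrades even though training itself completes normally.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # separable toy labels

def train(faulty: bool) -> np.ndarray:
    """Logistic regression via plain gradient descent; optionally keep
    w[1] stuck at zero, modeling a persistent training-time fault."""
    w = np.zeros(2)
    for _ in range(500):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= 0.1 * (X.T @ (p - y)) / len(y)
        if faulty:
            w[1] = 0.0
    return w

for faulty in (False, True):
    w = train(faulty)
    acc = np.mean((X @ w > 0) == (y > 0.5))
    print(f"faulty={faulty} weights={np.round(w, 2)} accuracy={acc:.2f}")
```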
Adversarial attacks on machine learning models have become a highly studied topic in both academia and industry. These attacks, along with traditional security threats, can compromise the confidentiality, integrity, and availability of an organization's assets…
External link: http://arxiv.org/abs/2012.04884
Neural networks have been shown to be vulnerable to fault injection attacks. These attacks change the physical behavior of the device during the computation, resulting in a change of the value that is currently being computed. They can be realized by…
External link: http://arxiv.org/abs/2002.11021
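The sketch below shows, in the simplest possible form, how such a computation-time fault can be exploited to recover a secret parameter: comparing the correct output of a toy neuron with an output computed under a modeled sign-flip on an intermediate value isolates that value. The neuron, the numbers, and the fault model are all assumptions for illustration, not the paper's technique.

```python
# Secret parameters of a single toy neuron y = w*x + b.
w, b = 0.73, -0.1

def neuron(x: float, fault: bool = False) -> float:
    t = w * x        # intermediate value targeted by the fault
    if fault:
        t = -t       # modeled sign-bit flip on t
    return t + b

x = 2.0
correct = neuron(x)
faulted = neuron(x, fault=True)
# correct - faulted = 2*w*x, so one faulty run reveals w:
w_recovered = (correct - faulted) / (2 * x)
print(w_recovered)   # 0.73
```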
Authors: Alam, Manaar; Bag, Arnab; Roy, Debapriya Basu; Jap, Dirmanto; Breier, Jakub; Bhasin, Shivam; Mukhopadhyay, Debdeep
Neural Networks (NN) have recently emerged as the backbone of several sensitive applications such as automobiles, medical imaging, and security. NNs inherently offer Partial Fault Tolerance (PFT) in their architecture; however, the biased PFT of NNs can lead…
External link: http://arxiv.org/abs/1902.04560
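To make "biased" partial fault tolerance concrete, here is a rough, assumption-laden sketch: it injects random stuck-at-zero faults into the weights of a hand-set two-class linear model and averages per-class accuracy over many trials, so that an uneven accuracy drop across the classes becomes visible. Nothing here reproduces the paper's experimental protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-class data: Gaussian blobs around (+2, +2) and (-2, -2).
X = np.vstack([rng.normal(2, 1, (200, 2)), rng.normal(-2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

W = np.array([[1.0, 1.0], [-1.0, -1.0]])  # hand-set linear classifier

def per_class_accuracy(W: np.ndarray) -> list:
    pred = np.argmax(X @ W.T, axis=1)
    return [float(np.mean(pred[y == c] == c)) for c in (0, 1)]

print("clean:", per_class_accuracy(W))

# Stuck-at-zero faults: zero each weight with probability 0.5, repeat,
# and average per-class accuracy to expose a biased degradation.
trials = [per_class_accuracy(W * (rng.random(W.shape) > 0.5))
          for _ in range(100)]
print("faulted (mean per class):", np.mean(trials, axis=0))
```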
As deep learning systems are widely adopted in safety- and security-critical applications, such as autonomous vehicles and banking systems, malicious faults and attacks become a tremendous concern, as they could potentially lead to catastrophic consequences…
External link: http://arxiv.org/abs/1806.05859