Showing 1 - 10 of 119 for the search: '"Alouani Ihsen"'
While machine learning (ML) models are becoming mainstream, especially in sensitive application areas, the risk of data leakage has become a growing concern. Attacks like membership inference (MIA) have shown that trained models can reveal sensitive…
External link: http://arxiv.org/abs/2411.06613
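Since this entry centers on membership inference, a minimal loss-threshold MIA sketch may help make the threat concrete; the names `model`, `x`, `y`, and the threshold `tau` below are illustrative placeholders, not details from the paper.

```python
# Hedged sketch: loss-threshold membership inference.
# A sample with unusually low loss under the trained model is
# guessed to be a training-set member.
import torch
import torch.nn.functional as F

def membership_score(model, x, y):
    """Higher score = more likely the (x, y) pair was trained on."""
    model.eval()
    with torch.no_grad():
        loss = F.cross_entropy(model(x), y, reduction="none")
    return -loss  # low loss -> high membership score

def infer_membership(model, x, y, tau):
    """Predict 'member' when the score clears a calibrated threshold tau."""
    return membership_score(model, x, y) > tau
```

In practice `tau` would be calibrated on shadow models or held-out data; stronger MIAs refine this basic score, but the thresholding idea is the same.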
The massive deployment of Machine Learning (ML) models has been accompanied by the emergence of several attacks that threaten their trustworthiness and raise ethical and societal concerns, such as invasion of privacy, discrimination risks, and lack of…
External link: http://arxiv.org/abs/2406.01708
As spiking neural networks (SNNs) gain traction in deploying neuromorphic computing solutions, protecting their intellectual property (IP) has become crucial. Without adequate safeguards, proprietary SNN architectures are at risk of theft, replication…
External link: http://arxiv.org/abs/2405.04049
Monocular depth estimation (MDE) has advanced significantly, primarily through the integration of convolutional neural networks (CNNs) and, more recently, Transformers. However, concerns about their susceptibility to adversarial attacks have emerged…
External link: http://arxiv.org/abs/2403.11515
With the mainstream integration of machine learning into security-sensitive domains such as healthcare and finance, concerns about data privacy have intensified. Conventional artificial neural networks (ANNs) have been found vulnerable to several attacks…
External link: http://arxiv.org/abs/2402.00906
The globalization of the Integrated Circuit (IC) supply chain, driven by time-to-market and cost considerations, has made ICs vulnerable to hardware Trojans (HTs). Against this threat, a promising approach is to use Machine Learning (ML)-based side-channel…
External link: http://arxiv.org/abs/2401.02342
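To make the ML-based side-channel idea concrete, here is a hedged sketch that trains a classifier on labeled power traces; the synthetic traces, the hand-picked statistical features, and the class labels are assumptions for illustration only, not the paper's pipeline.

```python
# Hedged sketch: classify side-channel power traces as
# Trojan-free vs. Trojan-infected.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
traces = rng.normal(size=(200, 1000))  # placeholder power traces
labels = rng.integers(0, 2, size=200)  # 0 = clean, 1 = Trojan

# Simple per-trace statistics as features (mean, std, peak).
feats = np.stack([traces.mean(1), traces.std(1), traces.max(1)], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```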
In this paper, we investigate the following question: Can we obtain adversarially-trained models without training on adversarial examples? Our intuition is that training a model with inherent stochasticity, i.e., optimizing the parameters by minimizing…
External link: http://arxiv.org/abs/2312.08877
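The abstract's core idea, optimizing under inherent stochasticity, can be sketched as a training step that backpropagates through a noise-perturbed copy of the weights, so the optimizer targets the expected loss under parameter noise. The Gaussian noise model and scale `sigma` are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: one SGD step on a noise-perturbed copy of the weights.
import torch

def noisy_step(model, loss_fn, x, y, optimizer, sigma=0.05):
    # Save clean weights, then perturb them in place.
    saved = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        for p in model.parameters():
            p.add_(sigma * torch.randn_like(p))

    # Gradient is taken at the noisy point...
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()

    # ...but applied to the clean weights, approximating a descent
    # step on the expected loss under parameter noise.
    with torch.no_grad():
        for p, s in zip(model.parameters(), saved):
            p.copy_(s)
    optimizer.step()
    return loss.item()
```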
Adversarial patches exemplify the tangible manifestation of the threat posed by adversarial attacks on Machine Learning (ML) models in real-world scenarios. Robustness against these attacks is of the utmost importance when designing computer vision applications…
External link: http://arxiv.org/abs/2312.00173
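As a toy companion to this abstract, the sketch below optimizes a small patch to maximize a fixed classifier's loss on one batch. The fixed top-left placement and single-batch setup are simplifying assumptions; real patch attacks optimize over many images, locations, and physical transformations.

```python
# Hedged sketch: optimize an adversarial patch against a frozen model.
import torch
import torch.nn.functional as F

def optimize_patch(model, x, y, size=16, steps=100, lr=0.1):
    # Patch shares the input's channel count; pixels kept in [0, 1].
    patch = torch.zeros(1, x.shape[1], size, size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        adv = x.clone()
        adv[..., :size, :size] = patch.clamp(0, 1)  # paste top-left
        # Negative cross-entropy: ascending the classifier's loss.
        loss = -F.cross_entropy(model(adv), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```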
The latest generation of transformer-based vision models has proven to be superior to Convolutional Neural Network (CNN)-based models across several vision tasks, largely attributed to their remarkable prowess in relation modeling. Deformable vision transformers…
External link: http://arxiv.org/abs/2311.12914
Author: Mamun, Md Abdullah Al; Alam, Quazi Mishkatul; Shayegani, Erfan; Zaree, Pedram; Alouani, Ihsen; Abu-Ghazaleh, Nael
Machine learning (ML) models are overparameterized to support generality and avoid overfitting. The state of these parameters is essentially a "don't-care" with respect to the primary model, provided that this state does not interfere with the primary…
External link: http://arxiv.org/abs/2307.08811