Showing 1 - 10 of 3,672 for the search '"A, Guesmi"'
Author:
Guesmi, Amira, Shafique, Muhammad
Autonomous vehicles (AVs) rely heavily on LiDAR (Light Detection and Ranging) systems for accurate perception and navigation, providing high-resolution 3D environmental data that is crucial for object detection and classification. However, LiDAR syst…
External link:
http://arxiv.org/abs/2409.20426
Adversarial attacks pose a significant challenge to deploying deep learning models in safety-critical applications. Maintaining model robustness while ensuring interpretability is vital for fostering trust and comprehension in these models. This stud…
External link:
http://arxiv.org/abs/2405.06278
Published in:
Proceedings of the 1st ContinualAI Unconference, 2023, PMLR 249:62-82, 2024
Continual learning (CL) has spurred the development of several methods aimed at consolidating previous knowledge across sequential learning. Yet, the evaluations of these methods have primarily focused on the final output, such as changes in the accu…
External link:
http://arxiv.org/abs/2405.03244
Monocular depth estimation (MDE) has advanced significantly, primarily through the integration of convolutional neural networks (CNNs) and, more recently, Transformers. However, concerns about their susceptibility to adversarial attacks have emerged, …
External link:
http://arxiv.org/abs/2403.11515
Adversarial patch attacks pose a significant threat to the practical deployment of deep learning systems. However, existing research primarily focuses on image pre-processing defenses, which often result in reduced classification accuracy for clean i…
External link:
http://arxiv.org/abs/2402.06249
Author:
Chattopadhyay, Nandish, Guesmi, Amira, Hanif, Muhammad Abdullah, Ouni, Bassem, Shafique, Muhammad
Adversarial patch-based attacks have proven to be a major deterrent to the reliable use of machine learning models. These attacks involve the strategic modification of localized patches or specific image areas to deceive trained machine learning…
External link:
http://arxiv.org/abs/2311.12211
Author:
Chattopadhyay, Nandish, Guesmi, Amira, Hanif, Muhammad Abdullah, Ouni, Bassem, Shafique, Muhammad
Adversarial attacks present a significant challenge to the dependable deployment of machine learning models, with patch-based attacks being particularly potent. These attacks introduce adversarial perturbations in localized regions of an image, decei…
External link:
http://arxiv.org/abs/2311.12084
In this paper, we present a comprehensive survey of current trends, focusing specifically on physical adversarial attacks. We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristi…
External link:
http://arxiv.org/abs/2308.06173
In this paper, we investigate the vulnerability of MDE to adversarial patches. We propose a novel Stealthy Adversarial Attack on MDE (SAAM) that compromises MDE by either corrupting the estimated dista…
External link:
http://arxiv.org/abs/2308.03108
Published in:
Financial Innovation, Vol 10, Iss 1, Pp 1-31 (2024)
Abstract: This paper employs wavelet coherence, Cross-Quantilogram (CQ), and Time-Varying Parameter Vector Autoregression (TVP-VAR) estimation strategies to investigate the dependence structure and connectedness between investments in artificial intel…
External link:
https://doaj.org/article/bf8ff252e00549da8fff0283e104df08