Showing 1 - 10 of 252,973 results
for the search: '"$p$-robustness"'
This paper investigates the adversarial robustness of Deep Neural Networks (DNNs) using Information Bottleneck (IB) objectives for task-oriented communication systems. We empirically demonstrate that while IB-based approaches provide baseline resilience…
External link:
http://arxiv.org/abs/2412.10265
Determining the robustness of deep learning models is an established and ongoing challenge within automated decision-making systems. With the advent and success of techniques that enable advanced deep learning (DL), these models are being used in wid…
External link:
http://arxiv.org/abs/2412.09795
Deep hash-based retrieval techniques are widely used in facial retrieval systems to improve the efficiency of facial matching. However, this also brings a risk of privacy leakage. Deep hash models are easily influenced by adversarial examples, which…
External link:
http://arxiv.org/abs/2412.09692
Quantum key distribution (QKD) offers a theoretically secure method to share secret keys, yet practical implementations face challenges due to noise and loss over long-distance channels. Traditional QKD protocols require extensive noise compensation, …
External link:
http://arxiv.org/abs/2412.08694
Author:
Wani, Farooq Ahmad, Bucarelli, Maria Sofia, Di Francesco, Andrea Giuseppe, Pryymak, Oleksandr, Silvestri, Fabrizio
Graph Neural Networks (GNNs) are powerful at solving graph classification tasks, yet applied problems often contain noisy labels. In this work, we study GNN robustness to label noise, demonstrate GNN failure modes when models struggle to generalise o…
External link:
http://arxiv.org/abs/2412.08419
Author:
Fadini, G., Coros, S.
We present a novel approach to quantifying and optimizing stability in robotic systems based on Lyapunov exponents, addressing an open challenge in the field of robot analysis, design, and optimization. Our method leverages differentiable simulation…
External link:
http://arxiv.org/abs/2412.06776
Robustness and generalization ability of machine learning models are of utmost importance in various application domains. There is wide interest in efficient ways to analyze these properties. One important direction is to analyze the connection between…
External link:
http://arxiv.org/abs/2412.06381
Deep learning-based image denoising models demonstrate remarkable performance, but their lack of robustness analysis remains a significant concern. A major issue is that these models are susceptible to adversarial attacks, where small, carefully crafted…
External link:
http://arxiv.org/abs/2412.05943
Author:
Capozzi, Gianluca, Tang, Tong, Wan, Jie, Yang, Ziqi, D'Elia, Daniele Cono, Di Luna, Giuseppe Antonio, Cavallaro, Lorenzo, Querzoni, Leonardo
Binary function similarity, which often relies on learning-based algorithms to identify which functions in a pool are most similar to a given query function, is a sought-after topic in different communities, including machine learning, software engineering…
External link:
http://arxiv.org/abs/2412.04163
Published in:
2024 39th Conference on Design of Circuits and Integrated Systems (DCIS)
Machine learning-based embedded systems employed in safety-critical applications such as aerospace and autonomous driving need to be robust against perturbations produced by soft errors. Soft errors are an increasing concern in modern digital process…
External link:
http://arxiv.org/abs/2412.03682