Showing 1 - 10 of 3,588
for search: '"DEMONTIS A."'
Author:
Piras, Giorgio, Pintor, Maura, Demontis, Ambra, Biggio, Battista, Giacinto, Giorgio, Roli, Fabio
Recent work has proposed neural network pruning techniques to reduce the size of a network while preserving robustness against adversarial examples, i.e., well-crafted inputs inducing a misclassification. These methods, which we refer to as adversari
External link:
http://arxiv.org/abs/2409.01249
Author:
Mura, Raffaele, Floris, Giuseppe, Scionis, Luca, Piras, Giorgio, Pintor, Maura, Demontis, Ambra, Giacinto, Giorgio, Biggio, Battista, Roli, Fabio
Gradient-based attacks are a primary tool to evaluate robustness of machine-learning models. However, many attacks tend to provide overly-optimistic evaluations as they use fixed loss functions, optimizers, step-size schedulers, and default hyperpara
External link:
http://arxiv.org/abs/2407.08806
A Hybrid Training-time and Run-time Defense Against Adversarial Attacks in Modulation Classification
Author:
Zhang, Lu, Lambotharan, Sangarapillai, Zheng, Gan, Liao, Guisheng, Demontis, Ambra, Roli, Fabio
Motivated by the superior performance of deep learning in many applications including computer vision and natural language processing, several recent studies have focused on applying deep neural networks for devising future generations of wireless net
External link:
http://arxiv.org/abs/2407.06807
Author:
Prete, Domenic, Demontis, Valeria, Zannier, Valentina, Sorba, Lucia, Beltram, Fabio, Rossella, Francesco
Achieving stable, high-quality quantum dots has proven challenging within device architectures rooted in conventional solid-state device fabrication paradigms. In fact, these grapple with complex protocols in order to balance ease of realization
External link:
http://arxiv.org/abs/2406.16363
Author:
Chen, Zhang, Demetrio, Luca, Gupta, Srishti, Feng, Xiaoyi, Xia, Zhaoqiang, Cinà, Antonio Emanuele, Pintor, Maura, Oneto, Luca, Demontis, Ambra, Biggio, Battista, Roli, Fabio
Thanks to their extensive capacity, over-parameterized neural networks exhibit superior predictive capabilities and generalization. However, having a large parameter space is considered one of the main suspects of the neural networks' vulnerability t
External link:
http://arxiv.org/abs/2406.10090
Author:
Demontis, Roberto
We prove that the conjecture made by Peter Frankl in the late 1970s is true. In other words, for every finite union-closed family which contains a non-empty set, there is an element that belongs to at least half of its m
External link:
http://arxiv.org/abs/2405.03731
Author:
Cinà, Antonio Emanuele, Rony, Jérôme, Pintor, Maura, Demetrio, Luca, Demontis, Ambra, Biggio, Battista, Ayed, Ismail Ben, Roli, Fabio
Adversarial examples are typically optimized with gradient-based attacks. While novel attacks are continuously proposed, each is shown to outperform its predecessors using different experimental setups, hyperparameter settings, and number of forward
External link:
http://arxiv.org/abs/2404.19460
Author:
Demontis, F., Pennisi, S.
We consider two possible ways, i.e., the Maxwellian Iteration and the Chapman-Enskog Method, to recover Relativistic Ordinary Thermodynamics from Relativistic Extended Thermodynamics of Polyatomic gases with N moments. Both of these methods give the
External link:
http://arxiv.org/abs/2310.10881
Author:
Floris, Giuseppe, Mura, Raffaele, Scionis, Luca, Piras, Giorgio, Pintor, Maura, Demontis, Ambra, Biggio, Battista
Evaluating the adversarial robustness of machine learning models using gradient-based attacks is challenging. In this work, we show that hyperparameter optimization can improve fast minimum-norm attacks by automating the selection of the loss functio
External link:
http://arxiv.org/abs/2310.08177
Neural network pruning has been shown to be an effective technique for reducing the network size, trading desirable properties like generalization and robustness to adversarial attacks for higher sparsity. Recent work has claimed that adversarial pruning
External link:
http://arxiv.org/abs/2310.08073