Showing 1 - 10 of 100 for search: '"Piras, Giorgio"'
Author:
Ledda, Emanuele, Scodeller, Giovanni, Angioni, Daniele, Piras, Giorgio, Cinà, Antonio Emanuele, Fumera, Giorgio, Biggio, Battista, Roli, Fabio
In learning problems, the noise inherent to the task at hand prevents inference without a certain degree of uncertainty. Quantifying this uncertainty, beyond its already wide use, is of particular relevance for security-sensitive applications…
External link:
http://arxiv.org/abs/2410.21952
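To make the uncertainty-quantification idea in the entry above concrete, here is a minimal sketch of one standard estimator, Monte Carlo dropout with predictive entropy. It is an illustration only, not the method from the paper; the model handle net and the sample count are assumptions.

# Illustrative sketch (not the paper's method): Monte Carlo dropout
# predictive entropy as a generic uncertainty estimate. `net` is any
# torch.nn.Module containing dropout layers; `x` is a batch of inputs.
import torch

def predictive_entropy(net, x, n_samples=20):
    """Average softmax over stochastic forward passes, then take entropy."""
    net.train()  # keep dropout active at inference time (MC dropout)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(net(x), dim=-1) for _ in range(n_samples)]
        ).mean(dim=0)
    # Entropy of the mean predictive distribution (in nats); higher
    # values mean the model is less certain about the input.
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

Inputs with high predictive entropy can then be flagged as suspicious, which is the detection setting that uncertainty-based defenses target.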
Author:
Piras, Giorgio, Pintor, Maura, Demontis, Ambra, Biggio, Battista, Giacinto, Giorgio, Roli, Fabio
Recent work has proposed neural network pruning techniques that reduce the size of a network while preserving robustness against adversarial examples, i.e., well-crafted inputs that induce misclassification. These methods, which we refer to as adversarial pruning…
External link:
http://arxiv.org/abs/2409.01249
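As a point of reference for the adversarial-pruning methods the entry above surveys, the sketch below shows plain global magnitude pruning, the simplest baseline; the sparsity level and the in-place masking are assumptions of the illustration, not details from the paper.

# Illustrative baseline only: global magnitude pruning. Adversarial
# pruning methods replace the magnitude criterion and/or combine the
# pruning step with adversarially robust training.
import torch

def magnitude_prune(model, sparsity=0.9):
    """Zero out the smallest-magnitude weights across all layers."""
    weights = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    k = max(1, int(sparsity * weights.numel()))
    threshold = weights.kthvalue(k).values  # k-th smallest magnitude
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() > threshold).float())  # apply binary mask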
Author:
Mura, Raffaele, Floris, Giuseppe, Scionis, Luca, Piras, Giorgio, Pintor, Maura, Demontis, Ambra, Giacinto, Giorgio, Biggio, Battista, Roli, Fabio
Gradient-based attacks are a primary tool for evaluating the robustness of machine-learning models. However, many attacks tend to provide overly optimistic evaluations because they use fixed loss functions, optimizers, step-size schedulers, and default hyperparameters…
External link:
http://arxiv.org/abs/2407.08806
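The entry above concerns gradient-based attacks; for readers unfamiliar with them, here is a sketch of the canonical example, an L-infinity PGD attack. It is the generic textbook formulation, not the paper's implementation; eps, alpha, and the step count are assumed values.

# Illustrative sketch of projected gradient descent (PGD) in the
# L-infinity ball; a generic formulation, not the paper's attack.
import torch
import torch.nn.functional as F

def pgd_attack(net, x, y, eps=8/255, alpha=2/255, steps=10):
    """Maximize cross-entropy within an L-inf ball of radius eps around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(net(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the ball
            x_adv = x_adv.clamp(0, 1)                 # keep a valid pixel range
    return x_adv.detach()

The fixed loss, step size, and schedule hardcoded here are exactly the kind of default choices the paper argues can make robustness evaluations overly optimistic.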
Author:
Floris, Giuseppe, Mura, Raffaele, Scionis, Luca, Piras, Giorgio, Pintor, Maura, Demontis, Ambra, Biggio, Battista
Evaluating the adversarial robustness of machine learning models using gradient-based attacks is challenging. In this work, we show that hyperparameter optimization can improve fast minimum-norm attacks by automating the selection of the loss function…
External link:
http://arxiv.org/abs/2310.08177
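To illustrate what hyperparameter optimization over an attack looks like in its simplest form, the sketch below grid-searches two PGD hyperparameters and keeps the configuration that lowers accuracy the most. It reuses the pgd_attack sketch above; the candidate grids are arbitrary assumptions, and the paper's approach (tuning fast minimum-norm attacks, including the loss function) is considerably more refined.

# Illustrative sketch only: exhaustive search over attack hyperparameters,
# keeping the strongest configuration (lowest adversarial accuracy).
import itertools

def tune_attack(net, x, y, alphas=(1/255, 2/255, 4/255), step_counts=(10, 50)):
    best, best_acc = None, float("inf")
    for alpha, steps in itertools.product(alphas, step_counts):
        x_adv = pgd_attack(net, x, y, alpha=alpha, steps=steps)
        acc = (net(x_adv).argmax(dim=-1) == y).float().mean().item()
        if acc < best_acc:  # a stronger attack drives accuracy lower
            best, best_acc = {"alpha": alpha, "steps": steps}, acc
    return best, best_acc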
Neural network pruning has been shown to be an effective technique for reducing network size, trading desirable properties like generalization and robustness to adversarial attacks for higher sparsity. Recent work has claimed that adversarial pruning…
External link:
http://arxiv.org/abs/2310.08073
Author:
Ledda, Emanuele, Angioni, Daniele, Piras, Giorgio, Fumera, Giorgio, Biggio, Battista, Roli, Fabio
Machine-learning models can be fooled by adversarial examples, i.e., carefully crafted input perturbations that force models to output wrong predictions. While uncertainty quantification has recently been proposed to detect adversarial inputs, under…
External link:
http://arxiv.org/abs/2309.10586
One of the most common causes of disrupted continuity in online systems is a widely popular cyberattack known as Distributed Denial of Service (DDoS), in which a network of infected devices (a botnet) is exploited to flood the computational capacity…
External link:
http://arxiv.org/abs/2208.05285
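The DDoS mechanism described above (many infected devices flooding one service) can be illustrated with the most naive countermeasure: a per-source sliding-window request-rate threshold. This is purely a toy; the window length and threshold are arbitrary assumptions, and the paper's subject is considerably broader.

# Toy illustration only: flag sources that exceed a request budget
# within a sliding time window. Real DDoS defenses are far more involved.
import time
from collections import defaultdict, deque

WINDOW_S, MAX_REQS = 10, 100   # assumed: 10-second window, 100 requests
history = defaultdict(deque)   # per-source request timestamps

def looks_like_flood(src_ip: str) -> bool:
    """Return True if src_ip sent more than MAX_REQS requests in WINDOW_S."""
    now = time.time()
    q = history[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_S:  # evict timestamps outside the window
        q.popleft()
    return len(q) > MAX_REQS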
Author:
Mura, Raffaele, Floris, Giuseppe, Scionis, Luca, Piras, Giorgio, Pintor, Maura, Demontis, Ambra, Giacinto, Giorgio, Biggio, Battista, Roli, Fabio
Published in:
Neurocomputing, vol. 616, 1 February 2025.
Author:
Piras, Giorgio
Published in:
Bulletin of the Institute of Classical Studies, 2017, 60(2), pp. 8-20.
External link:
https://www.jstor.org/stable/48554655
Author:
Piras, Giorgio
Published in:
Scienze dell'Antichità, 2024, Vol. 30, Issue 1, pp. V-X, 6 p.