Showing 1 - 10 of 29
for search: '"Sotgiu, Angelo"'
Author:
Pintor, Maura, Angioni, Daniele, Sotgiu, Angelo, Demetrio, Luca, Demontis, Ambra, Biggio, Battista, Roli, Fabio
Adversarial patches are optimized contiguous pixel blocks in an input image that cause a machine-learning model to misclassify it. However, their optimization is computationally demanding and requires careful hyperparameter tuning, potentially […]
External link:
http://arxiv.org/abs/2203.04412
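The adversarial-patch idea described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the patch placement, the pixel range [0, 1], and the signed-gradient step size are all assumptions, and the gradient itself would come from a real model in practice.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overwrite a contiguous pixel block of `image` with `patch`."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

def patch_step(patch, grad, lr=0.1):
    """One signed-gradient ascent step on the patch pixels, clipped to
    the valid [0, 1] range. `grad` would be the loss gradient w.r.t.
    the patch, obtained from the attacked model (here a placeholder)."""
    return np.clip(patch + lr * np.sign(grad), 0.0, 1.0)

# Toy usage: paste a 4x4 gray patch onto an 8x8 black image,
# then take one optimization step with a dummy gradient.
image = np.zeros((8, 8))
patch = np.full((4, 4), 0.5)
adv = apply_patch(image, patch, top=2, left=2)
patch = patch_step(patch, grad=np.ones((4, 4)))
```

In a real attack, `patch_step` would be iterated, recomputing the model gradient after each step; the abstract's point is precisely that this loop is computationally demanding.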
Author:
Pintor, Maura, Demetrio, Luca, Sotgiu, Angelo, Demontis, Ambra, Carlini, Nicholas, Biggio, Battista, Roli, Fabio
Evaluating the robustness of machine-learning models to adversarial examples is a challenging problem. Many defenses have been shown to provide a false sense of robustness by causing gradient-based attacks to fail, and they have been broken under more […]
External link:
http://arxiv.org/abs/2106.09947
Deep neural networks are vulnerable to adversarial examples, i.e., carefully-crafted inputs that mislead classification at test time. Recent defenses have been shown to improve adversarial robustness by detecting anomalous deviations from legitimate […]
External link:
http://arxiv.org/abs/2010.09119
Author:
Melacci, Stefano, Ciravegna, Gabriele, Sotgiu, Angelo, Demontis, Ambra, Biggio, Battista, Gori, Marco, Roli, Fabio
Published in:
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 12, pp. 9944-9959, 1 Dec. 2022
Adversarial attacks on machine learning-based classifiers, along with defense mechanisms, have been widely studied in the context of single-label classification problems. In this paper, we shift the attention to multi-label classification, where […]
External link:
http://arxiv.org/abs/2006.03833
Author:
Pintor, Maura, Demetrio, Luca, Sotgiu, Angelo, Melis, Marco, Demontis, Ambra, Biggio, Battista
Published in:
SoftwareX 18 (2022)
We present secml, an open-source Python library for secure and explainable machine learning. It implements the most popular attacks against machine learning, including test-time evasion attacks to generate adversarial examples against […]
External link:
http://arxiv.org/abs/1912.10013
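The test-time evasion attacks mentioned in the secml abstract follow a common pattern: perturb an input along the sign of the loss gradient, within a small budget. The sketch below illustrates this with the fast gradient sign method on a plain logistic model; it is a generic illustration under those assumptions, not secml's actual API, and the model, parameters, and epsilon budget are all made up for the example.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One-step evasion attack (FGSM) on a linear logistic classifier.

    For the logistic loss, the gradient w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; the attack moves each feature by
    eps in the sign of that gradient, clipped to the [0, 1] range.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probability
    grad = (p - y) * w                      # input gradient of the loss
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy usage: a 2-feature input nudged away from its true label y=1.
x_adv = fgsm(np.array([0.2, 0.8]), np.array([2.0, -1.0]), 0.0, 1, eps=0.1)
```

The perturbation is bounded in the infinity norm by `eps`; libraries such as secml wrap this kind of step inside iterative, projected variants against full deep networks.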
Author:
Sotgiu, Angelo, Demontis, Ambra, Melis, Marco, Biggio, Battista, Fumera, Giorgio, Feng, Xiaoyi, Roli, Fabio
Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time. […]
External link:
http://arxiv.org/abs/1910.00470
Author:
Pintor, Maura, Angioni, Daniele, Sotgiu, Angelo, Demetrio, Luca, Demontis, Ambra, Biggio, Battista, Roli, Fabio
Published in:
Pattern Recognition, vol. 134, February 2023
The importance of employing machine learning for malware detection has become explicit to the security community. Several anti-malware vendors have claimed and advertised the application of machine learning in their products, in which the inference […]
External link:
http://arxiv.org/abs/1802.01185
Published in:
Neurocomputing, vol. 470, pp. 257-268, 22 January 2022
Academic article