Showing 1 - 10 of 27 results for the search: '"Ambra Demontis"'
Published in:
SoftwareX, Vol 18, 101095 (2022)
We present secml, an open-source Python library for secure and explainable machine learning. It implements the most popular attacks against machine learning, including test-time evasion attacks to generate adversarial examples against deep neural networks.
External link:
https://doaj.org/article/565392ef62384406abeb1a89a7961bd8
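As a minimal illustration of the test-time evasion attacks the abstract mentions, the sketch below performs a single fast-gradient-sign (FGSM) step against a hypothetical logistic-regression model in plain NumPy. It does not use secml's own API, and all names (w, b, epsilon) are illustrative assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fgsm_evasion(x, y, w, b, epsilon=0.1):
        # One fast-gradient-sign step against a logistic-regression model
        # (w, b): perturb x to increase the cross-entropy loss on label y.
        p = sigmoid(w @ x + b)      # predicted probability of class 1
        grad_x = (p - y) * w        # d(loss)/dx for binary cross-entropy
        return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

    rng = np.random.default_rng(0)
    w, b = rng.normal(size=16), 0.0   # hypothetical trained model
    x, y = rng.uniform(size=16), 1    # one input with true label 1
    x_adv = fgsm_evasion(x, y, w, b)
    print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # the class-1 score drops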
Author:
Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli
Published in:
EURASIP Journal on Information Security, Vol 2020, Iss 1, Pp 1-10 (2020)
Abstract: Despite the impressive performances reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time.
External link:
https://doaj.org/article/26aa91538cec41a1898b25e1e12d9db9
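A common class of defenses against the adversarial examples described above rejects suspicious inputs at test time instead of classifying them. The snippet below is a much-simplified, generic sketch of that rejection idea, not necessarily this paper's method: it abstains whenever the top softmax score falls under an arbitrarily chosen threshold.

    import numpy as np

    def predict_with_reject(scores, threshold=0.8):
        # Return the predicted class, or -1 (reject) when the top
        # softmax score falls below the threshold.
        k = int(np.argmax(scores))
        return k if scores[k] >= threshold else -1

    print(predict_with_reject(np.array([0.05, 0.90, 0.05])))  # -> 1
    print(predict_with_reject(np.array([0.40, 0.35, 0.25])))  # -> -1 (rejected)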
Author:
Yang Zheng, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Maura Pintor, Battista Biggio, Fabio Roli
Published in:
Information Sciences. 632:130-143
Adversarial reprogramming allows repurposing a machine-learning model to perform a different task. For example, a model trained to recognize animals can be reprogrammed to recognize digits by embedding an adversarial program in the digit images provided as input.
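The embedding step described in the abstract can be sketched directly: a small target-task image is pasted into the center of a larger, learned "program" that surrounds it, and the combined image is fed to the source model. Below is a minimal NumPy illustration; the sizes (a 224x224 frame, a 28x28 digit) and all variable names are assumptions, and the gradient-based optimization of the program itself is omitted.

    import numpy as np

    def reprogram_input(small_img, program):
        # Embed the small target-task image (e.g., a 28x28 digit) at the
        # center of the learned adversarial program; the source model is
        # then fed the combined image.
        H, W = program.shape
        h, w = small_img.shape
        top, left = (H - h) // 2, (W - w) // 2
        x = program.copy()
        x[top:top + h, left:left + w] = small_img
        return np.clip(x, 0.0, 1.0)

    rng = np.random.default_rng(0)
    program = rng.uniform(size=(224, 224))  # would be optimized by gradient descent
    digit = rng.uniform(size=(28, 28))      # stand-in for a target-task sample
    x = reprogram_input(digit, program)     # input shown to the source model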
A Hybrid Training-Time and Run-Time Defense Against Adversarial Attacks in Modulation Classification
Published in:
IEEE Wireless Communications Letters. 11:1161-1165
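This record carries no abstract, so the sketch below only illustrates the two generic ingredients named in the title, not the paper's actual method: an adversarial-training update (training-time) paired with randomized smoothing of the input (run-time), both on a hypothetical logistic-regression model in NumPy.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def adv_train_step(x, y, w, b, lr=0.01, epsilon=0.05):
        # Training-time ingredient: one gradient step on an FGSM-perturbed
        # copy of the sample instead of the clean one (adversarial training).
        p = sigmoid(w @ x + b)
        x_adv = np.clip(x + epsilon * np.sign((p - y) * w), 0.0, 1.0)
        p_adv = sigmoid(w @ x_adv + b)
        return w - lr * (p_adv - y) * x_adv, b - lr * (p_adv - y)

    def smoothed_score(x, w, b, sigma=0.1, n=100, rng=None):
        # Run-time ingredient: average the model's score over Gaussian-noised
        # copies of the input (randomized smoothing).
        rng = rng or np.random.default_rng(0)
        noisy = x + sigma * rng.normal(size=(n, x.size))
        return float(sigmoid(noisy @ w + b).mean())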
Author:
Xinglong Chang, Katharina Dost, Kaiqi Zhao, Ambra Demontis, Fabio Roli, Gillian Dobbie, Jörg Wicker
Published in:
Advances in Knowledge Discovery and Data Mining, ISBN 9783031333736
External links:
https://explore.openaire.eu/search/publication?articleId=doi_________::528e2fe24ce72666f6ab0a406a9ae67d
https://doi.org/10.1007/978-3-031-33374-3_1
Published in:
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security.
Author:
Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli
Adversarial patches are optimized contiguous pixel blocks in an input image that cause a machine-learning model to misclassify it. However, their optimization is computationally demanding and requires careful hyperparameter tuning, potentially leading to suboptimal robustness evaluations.
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::ed871e8ab76277d748b3abeb1cb0bc67
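To make the "contiguous pixel block" concrete, the sketch below shows how a pre-computed patch would be pasted into an input image. The computationally demanding part the abstract refers to, optimizing the patch pixels themselves, is omitted, and all sizes and names are assumptions.

    import numpy as np

    def apply_patch(image, patch, top, left):
        # Paste a pre-computed adversarial patch (a contiguous pixel
        # block) into the image at position (top, left).
        out = image.copy()
        h, w = patch.shape[:2]
        out[top:top + h, left:left + w] = patch
        return out

    rng = np.random.default_rng(0)
    image = rng.uniform(size=(224, 224, 3))
    patch = rng.uniform(size=(50, 50, 3))   # stands in for an optimized patch
    patched = apply_patch(image, patch, top=10, left=10)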
Author:
Yang Zheng, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli
Adversarial reprogramming allows stealing computational resources by repurposing machine learning models to perform a different task chosen by the attacker. For example, a model trained to recognize images of animals can be reprogrammed to recognize images of digits.
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7f3d5a2b874eca08cd15a6185a3447b8
Author:
Yisroel Mirsky, Ambra Demontis, Jaidip Kotak, Ram Shankar, Deng Gelei, Liu Yang, Xiangyu Zhang, Maura Pintor, Wenke Lee, Yuval Elovici, Battista Biggio
Published in:
Computers & Security. 124:103006