Showing 1 - 10 of 21 for the search: '"Carbone, Ginevra"'
Score-based and diffusion models have emerged as effective approaches for both conditional and unconditional generation. Still, conditional generation is based on either a specific training of a conditional model or classifier guidance, which requires…
External link:
http://arxiv.org/abs/2308.16534
Author:
Bortolussi, Luca, Carbone, Ginevra, Laurenti, Luca, Patane, Andrea, Sanguinetti, Guido, Wicker, Matthew
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, training deep learning models robust to adversarial attacks…
External link:
http://arxiv.org/abs/2207.06154
Parametric verification of linear temporal properties for stochastic models can be expressed as computing the satisfaction probability of a certain property as a function of the parameters of the model. Smoothed model checking (smMC) aims at inferring…
External link:
http://arxiv.org/abs/2205.05398
Markov Population Models are a widespread formalism used to model the dynamics of complex systems, with applications in Systems Biology and many other fields. The associated Markov stochastic process in continuous time is often analyzed by simulation…
External link:
http://arxiv.org/abs/2106.12981
We consider the problem of the stability of saliency-based explanations of Neural Network predictions under adversarial attacks in a classification task. Saliency interpretations of deterministic Neural Networks are remarkably brittle even when the attacks…
External link:
http://arxiv.org/abs/2102.11010
We propose two training techniques for improving the robustness of Neural Networks to adversarial attacks, i.e. manipulations of the inputs that are maliciously crafted to fool networks into incorrect predictions. Both methods are independent of the…
External link:
http://arxiv.org/abs/2102.09230
Author:
Bortolussi, Luca, Cairoli, Francesca, Carbone, Ginevra, Franchina, Francesco, Regolin, Enrico
We introduce a novel learning-based approach to synthesize safe and robust controllers for autonomous Cyber-Physical Systems and, at the same time, to generate challenging tests. This procedure combines formal methods for model verification with Generative…
External link:
http://arxiv.org/abs/2009.02019
Author:
Carbone, Ginevra, Sarti, Gabriele
Published in:
Italian Journal of Computational Linguistics (IJCoL) 6-2 (2020) 61-77
Plug-and-play language models (PPLMs) enable topic-conditioned natural language generation by pairing large pre-trained generators with attribute models used to steer the predicted token distribution towards the selected topic. Despite their computational…
External link:
http://arxiv.org/abs/2008.10875
Author:
Carbone, Ginevra, Wicker, Matthew, Laurenti, Luca, Patane, Andrea, Bortolussi, Luca, Sanguinetti, Guido
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, the problem remains open. In this paper, we analyse the…
External link:
http://arxiv.org/abs/2002.04359
Academic article
This result cannot be displayed to unauthenticated users. You must sign in to view this result.