Showing 1 - 5 of 5 results for search: '"Dyrmishi, Salijona"'
Deep Generative Models (DGMs) have found application in computer vision for generating adversarial examples to test the robustness of machine learning (ML) systems. Extending these adversarial techniques to tabular ML presents unique challenges due …
External link:
http://arxiv.org/abs/2409.12642
Authors:
Stoian, Mihaela Cătălina, Dyrmishi, Salijona, Cordy, Maxime, Lukasiewicz, Thomas, Giunchiglia, Eleonora
Deep Generative Models (DGMs) have been shown to be powerful tools for generating tabular data, as they have been increasingly able to capture the complex distributions that characterize them. However, to generate realistic synthetic data, it is …
External link:
http://arxiv.org/abs/2402.04823
Natural Language Processing (NLP) models based on Machine Learning (ML) are susceptible to adversarial attacks -- malicious algorithms that imperceptibly modify input text to force models into making incorrect predictions. However, evaluations of the …
External link:
http://arxiv.org/abs/2305.15587
While the literature on security attacks and defense of Machine Learning (ML) systems mostly focuses on unrealistic adversarial examples, recent research has raised concern about the under-explored field of realistic adversarial attacks and their …
External link:
http://arxiv.org/abs/2202.03277
The generation of feasible adversarial examples is necessary for properly assessing models that work in constrained feature space. However, it remains a challenging task to enforce constraints into attacks that were designed for computer vision. We …
External link:
http://arxiv.org/abs/2112.01156