Showing 1 - 3 of 3 for search: '"Kabaha, Anan"'
Author:
Kabaha, Anan, Drachsler-Cohen, Dana
Neural networks are successful in various applications but are also susceptible to adversarial attacks. To show the safety of network classifiers, many verifiers have been introduced to reason about the local robustness of a given input to a given perturbation…
External link:
http://arxiv.org/abs/2402.19322
Neural networks are susceptible to privacy attacks. To date, no verifier can reason about the privacy of individuals participating in the training set. We propose a new privacy property, called local differential classification privacy (LDCP), extend…
External link:
http://arxiv.org/abs/2310.20299
Author:
Kabaha, Anan, Drachsler-Cohen, Dana
Deep neural networks have been shown to be vulnerable to adversarial attacks that perturb inputs based on semantic features. Existing robustness analyzers can reason about semantic feature neighborhoods to increase the networks' reliability. However…
External link:
http://arxiv.org/abs/2209.05446