Showing 1 - 10 of 13 for the search: '"Pautov, Mikhail"'
The vulnerability of artificial neural networks to adversarial perturbations in the black-box setting is widely studied in the literature. The majority of attack methods to construct these perturbations suffer from an impractically large number of queries …
External link:
http://arxiv.org/abs/2410.15889
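To illustrate the query-budget issue raised in the snippet above, here is a minimal sketch of a generic score-based black-box attack driven by random search. It is not the method of the paper; the model_scores stub, the class count, and the parameters eps, step, and budget are illustrative assumptions standing in for a real query-only API.

import numpy as np

def model_scores(x: np.ndarray) -> np.ndarray:
    """Hypothetical black-box model: returns class scores for input x.
    Stands in for a remote API that can only be queried, not inspected."""
    rng = np.random.default_rng(abs(hash(x.tobytes())) % (2**32))
    return rng.random(10)

def random_search_attack(x, true_label, eps=0.05, step=0.01, budget=1000):
    """Score-based black-box attack via simple random search.

    Each candidate evaluation costs one model query; the attack succeeds
    when the true class no longer has the highest score. The number of
    queries spent is exactly the practical cost the abstract refers to."""
    x_adv = x.copy()
    best = model_scores(x_adv)[true_label]
    for query in range(1, budget + 1):
        # Propose a small random sign perturbation and project it back
        # into the eps-ball around the original input and the valid range.
        candidate = x_adv + step * np.random.choice([-1.0, 1.0], size=x.shape)
        candidate = np.clip(candidate, x - eps, x + eps)
        candidate = np.clip(candidate, 0.0, 1.0)

        scores = model_scores(candidate)
        if scores[true_label] < best:      # keep moves that hurt the true class
            x_adv, best = candidate, scores[true_label]
        if scores.argmax() != true_label:  # misclassified: attack succeeded
            return x_adv, query
    return x_adv, budget

if __name__ == "__main__":
    image = np.random.default_rng(0).random((3, 32, 32))
    adv, used = random_search_attack(image, true_label=0)
    print(f"queries used: {used}")

Every candidate evaluation consumes one query, which is the budget that query-efficient black-box attacks aim to minimize.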
While Deep Neural Networks (DNNs) have demonstrated remarkable performance in tasks related to perception and control, there are still several unresolved concerns regarding the privacy of their training data, particularly in the context of vulnerability …
External link:
http://arxiv.org/abs/2405.07562
Speaker recognition technology is applied in various tasks ranging from personal virtual assistants to secure access systems. However, the robustness of these systems against adversarial attacks, particularly to additive perturbations, remains a significant …
External link:
http://arxiv.org/abs/2404.18791
As deep learning (DL) models are widely and effectively used in Machine Learning as a Service (MLaaS) platforms, there is a rapidly growing interest in DL watermarking techniques that can be used to confirm the ownership of a particular model. Unfortunately, …
External link:
http://arxiv.org/abs/2401.08261
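As background for the watermarking setting described above, the following is a minimal sketch of one common generic scheme, trigger-set (backdoor) ownership verification. It is not the technique studied in the paper; model_predict, the trigger set, and the 0.9 agreement threshold are all illustrative assumptions.

import numpy as np

def model_predict(x: np.ndarray) -> int:
    """Hypothetical suspect model exposed through an MLaaS API."""
    return int(x.sum()) % 10

def verify_ownership(trigger_inputs, trigger_labels, threshold=0.9) -> bool:
    """Trigger-set watermark verification: the owner keeps a secret set of
    inputs with pre-assigned labels that the watermarked model was trained
    to reproduce; high agreement on this set is taken as evidence of
    ownership of the deployed model."""
    matches = sum(model_predict(x) == y for x, y in zip(trigger_inputs, trigger_labels))
    return matches / len(trigger_labels) >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    triggers = [rng.integers(0, 256, size=(3, 32, 32)) for _ in range(20)]
    labels = [int(t.sum()) % 10 for t in triggers]  # toy labels this toy model reproduces
    print("ownership claim holds:", verify_ownership(triggers, labels))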
Randomized smoothing is the state-of-the-art approach to construct image classifiers that are provably robust against additive adversarial perturbations of bounded magnitude. However, it is more complicated to construct reasonable certificates against …
External link:
http://arxiv.org/abs/2309.16710
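For reference, the construction referred to in this entry is the standard randomized smoothing setup (textbook background, not the specific contribution of the paper above): given a base classifier $f$ and noise level $\sigma$, the smoothed classifier is $g(x) = \arg\max_c \, \mathbb{P}_{\varepsilon \sim \mathcal{N}(0, \sigma^2 I)}\left[ f(x+\varepsilon) = c \right]$, and it is provably robust to additive perturbations within the certified $\ell_2$ radius $R = \frac{\sigma}{2}\left( \Phi^{-1}(p_A) - \Phi^{-1}(p_B) \right)$, where $p_A$ and $p_B$ are the probabilities of the top and runner-up classes under the noise and $\Phi^{-1}$ is the inverse standard normal CDF.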
Neural networks are deployed widely in natural language processing tasks on the industrial scale, and perhaps most often they are used as components of automatic machine translation systems. In this work, we present a simple approach to fool state-of-the-art …
External link:
http://arxiv.org/abs/2303.10974
Author:
Pautov, Mikhail, Kuznetsova, Olesya, Tursynbek, Nurislam, Petiushko, Aleksandr, Oseledets, Ivan
Published in:
Advances in Neural Information Processing Systems 35 (NeurIPS 2022)
Randomized smoothing is considered to be the state-of-the-art provable defense against adversarial perturbations. However, it heavily exploits the fact that classifiers map input objects to class probabilities and do not focus on the ones that learn …
External link:
http://arxiv.org/abs/2202.01186
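To make the probability-based prediction mentioned above concrete, here is a minimal Monte Carlo sketch of how a smoothed prediction is typically formed in standard randomized smoothing (it does not reproduce the embedding-focused approach of this paper); the toy base_classifier, the two-class setup, sigma, and the sample count are assumptions.

import numpy as np

def base_classifier(x: np.ndarray) -> int:
    """Hypothetical base classifier: a toy rule on the mean pixel value."""
    return int(x.mean() > 0.5)

def smoothed_predict(x: np.ndarray, sigma: float = 0.25, n_samples: int = 1000) -> int:
    """Monte Carlo estimate of the smoothed classifier g(x): the class most
    frequently predicted by the base classifier under Gaussian input noise."""
    rng = np.random.default_rng(0)
    votes = np.zeros(2, dtype=int)  # two classes in this toy setup
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        votes[base_classifier(noisy)] += 1
    return int(votes.argmax())

if __name__ == "__main__":
    image = np.full((3, 32, 32), 0.6)
    print("smoothed prediction:", smoothed_predict(image))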
Author:
Pautov, Mikhail, Tursynbek, Nurislam, Munkhoeva, Marina, Muravev, Nikita, Petiushko, Aleksandr, Oseledets, Ivan
Published in:
36th AAAI Conference on Artificial Intelligence (AAAI-2022)
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks -- small modifications of the input that change the predictions. Besides rigorously studied $\ell_p$-bounded additive perturbations, recently …
External link:
http://arxiv.org/abs/2109.10696
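For reference, the $\ell_p$-bounded additive perturbation mentioned in the entry above is the standard threat model in which an adversary may replace an input $x$ with $x + \delta$ for any $\delta$ satisfying $\|\delta\|_p \le \epsilon$, and the attack succeeds if the prediction changes, i.e. $f(x+\delta) \neq f(x)$; certified defenses aim to prove that no such $\delta$ exists for a given $x$ and budget $\epsilon$.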
Published in:
2019 International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON)
Recent works showed the vulnerability of image classifiers to adversarial attacks in the digital domain. However, the majority of attacks involve adding a small perturbation to an image to fool the classifier. Unfortunately, such procedures cannot be …
External link:
http://arxiv.org/abs/1910.07067
Published in:
2019 International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON)
Recent studies proved that deep learning approaches achieve remarkable results on the face detection task. On the other hand, these advances gave rise to a new problem associated with the security of deep convolutional neural network models, unveiling potential …
External link:
http://arxiv.org/abs/1910.06261