Showing 1 - 4
of 4
for query: '"Kazuya Kakizaki"'
Published in:
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).
It is well known that most existing machine learning (ML)-based safety-critical applications are vulnerable to carefully crafted input instances called adversarial examples (AXs). An adversary can conveniently attack these target systems from dig…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::87325d406ceea5e00975bc98971fc580
http://arxiv.org/abs/2203.15498
We assess the vulnerabilities of deep face recognition systems to images that falsify/spoof multiple identities simultaneously. We demonstrate that, by manipulating the deep feature representation extracted from a face image via imperceptibly small…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::977543de03549a8eef53a5be985a3c96
http://arxiv.org/abs/2110.00708
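The feature-space manipulation described in the abstract above can be illustrated with a toy sketch: gradient descent drives an L∞-bounded perturbation so that the perturbed image's embedding approaches a target identity's embedding. This is a minimal illustration, not the paper's method; the linear "extractor" `W` stands in for a deep network, and all names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "deep feature extractor": a fixed random linear map (stand-in for a CNN).
W = rng.normal(size=(8, 32))

def features(x):
    return W @ x

x = rng.normal(size=32)                       # source face image (flattened, toy scale)
target_feat = features(rng.normal(size=32))   # embedding of a target identity

eps = 0.05           # L-infinity bound on the perturbation ("imperceptibly small")
lr = 0.01
delta = np.zeros_like(x)

for _ in range(500):
    # Gradient of ||features(x + delta) - target_feat||^2 with respect to delta.
    grad = 2 * W.T @ (features(x + delta) - target_feat)
    # Gradient step, then projection back into the L-infinity ball of radius eps.
    delta = np.clip(delta - lr * grad, -eps, eps)

before = np.linalg.norm(features(x) - target_feat)
after = np.linalg.norm(features(x + delta) - target_feat)
print(after < before)  # the bounded perturbation pulls the embedding toward the target
```

The projection step (`np.clip`) is what keeps the attack "imperceptible" in this sketch; a real attack on a face verification system would use the network's actual gradients and an image-space constraint.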
Published in:
BIOSIG
DNN-based face verification systems are vulnerable to adversarial examples. The evaluation protocol (scenario) used in previous work, which we call the probe-dependent attack scenario, is unrealistic. We define a more practical attack scenario, the p…