Adversarial attacks on fingerprint liveness detection

Authors: Jianwei Fei, Zhihua Xia, Peipeng Yu, Fengjun Xiao
Language: English
Year of publication: 2020
Subject:
Source: EURASIP Journal on Image and Video Processing, Vol 2020, Iss 1, Pp 1-11 (2020)
Document type: article
ISSN: 1687-5281
DOI: 10.1186/s13640-020-0490-z
Description: Abstract Deep neural networks are vulnerable to adversarial samples, which poses a potential threat to applications that deploy deep learning models in practice. A typical example is the fingerprint liveness detection module in fingerprint authentication systems. Spurred by the rapid progress of deep learning, deep network-based fingerprint liveness detection algorithms have emerged and now dominate the field. In this paper, we therefore investigate the feasibility of deceiving state-of-the-art deep network-based fingerprint liveness detection schemes by exploiting this vulnerability. Extensive evaluations are made with three existing adversarial methods: FGSM, MI-FGSM, and DeepFool. We also propose an adversarial attack method that enhances the robustness of adversarial fingerprint images to transformations such as rotation and flipping. We demonstrate that these otherwise strong schemes are likely to classify fake fingerprints as live fingerprints once tiny perturbations are added, even without access to the internal details of the underlying model. The experimental results reveal a serious security loophole in these schemes, and urgent attention needs to be paid to adversarial defenses, not only in fingerprint liveness detection but in all deep learning applications.
Database: Directory of Open Access Journals
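
The abstract above refers to gradient-based attacks such as FGSM. As a minimal sketch of that general technique (not the authors' exact method), the following PyTorch snippet perturbs an input in the direction of the loss gradient's sign; the model, image tensor, labels, and epsilon value are all hypothetical stand-ins.

```python
# Minimal FGSM sketch (PyTorch). The model, inputs, and labels below are
# placeholder assumptions, not the networks or data used in the paper.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Generate adversarial examples with the Fast Gradient Sign Method.

    x: image batch with values in [0, 1]
    y: labels (e.g., 0 = fake fingerprint, 1 = live fingerprint; assumed encoding)
    epsilon: maximum per-pixel perturbation magnitude
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy stand-in classifier and random image, purely for illustration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))
    x = torch.rand(1, 3, 224, 224)   # stand-in "fake fingerprint" image
    y = torch.tensor([0])            # assumed label 0 = "fake"
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())   # perturbation is bounded by epsilon
```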