Robust single-sample face recognition by sparsity-driven sub-dictionary learning using deep features
Author(s): | Cuculo, Vittorio; D'Amelio, Alessandro; Grossi, Giuliano; Lanzarotti, Raffaella; Lin, Jianyi |
---|---|
Language: | English |
Year of publication: | 2019 |
Subject: |
sparse recovery; face recognition; dictionary learning; single sample per person (SSPP); method of optimal directions (MOD); Deep Convolutional Neural Network (DCNN) features; deep learning; biometric identification; facial recognition algorithms; pattern recognition, automated; image processing, computer-assisted; databases, factual; humans; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; lcsh:Chemical technology; lcsh:TP1-1185; Settore INF/01 - INFORMATICA; Settore ING-INF/05 - SISTEMI DI ELABORAZIONE DELLE INFORMAZIONI; Article |
Source: | Sensors (Basel, Switzerland), Vol 19, Iss 1, p 146 (2019) |
Description: | Face recognition using a single reference image per subject is challenging, above all when referring to a large gallery of subjects. Furthermore, the problem hardness seriously increases when the images are acquired in unconstrained conditions. In this paper we address the challenging Single Sample Per Person (SSPP) problem considering large datasets of images acquired in the wild, thus possibly featuring illumination, pose, facial-expression, partial-occlusion, and low-resolution hurdles. The proposed technique alternates a sparse dictionary learning step based on the method of optimal directions (MOD) with the iterative ℓ0-norm minimization algorithm called k-LiMapS. It works on robust deep-learned features, provided that the image variability is extended by standard augmentation techniques. Experiments show the effectiveness of our method against the hurdles introduced above: first, we report extensive experiments on the unconstrained LFW dataset with large galleries of up to 1680 subjects; second, we present experiments on very low-resolution test images down to 8 × 8 pixels; third, tests on the AR dataset are analyzed against specific disguises such as partial occlusions, facial expressions, and illumination problems. In all three scenarios our method outperforms state-of-the-art approaches adopting similar configurations. |
Database: | OpenAIRE |
External link: |
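The alternation described in the abstract, a MOD dictionary update interleaved with an ℓ0-constrained sparse coding step, can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation: the `k-LiMapS` coder used in the paper is replaced here by simple hard thresholding of least-squares codes, and all function names (`sparse_code`, `mod_update`, `learn_dictionary`) are hypothetical.

```python
import numpy as np

def sparse_code(D, X, k):
    """Stand-in sparse coder (the paper uses k-LiMapS; here we hard-threshold
    the least-squares codes, keeping the k largest entries per column)."""
    C, *_ = np.linalg.lstsq(D, X, rcond=None)
    # Zero out all but the k largest-magnitude coefficients in each column.
    idx = np.argsort(np.abs(C), axis=0)[:-k, :]
    np.put_along_axis(C, idx, 0.0, axis=0)
    return C

def mod_update(X, C):
    """Method of Optimal Directions: D = X C^T (C C^T)^-1, via pseudo-inverse."""
    return X @ np.linalg.pinv(C)

def learn_dictionary(X, n_atoms, k, n_iter=10, seed=0):
    """Alternate sparse coding and MOD dictionary updates on signals X."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0) + 1e-12
    for _ in range(n_iter):
        C = sparse_code(D, X, k)          # fix D, update codes
        D = mod_update(X, C)              # fix codes, update dictionary
        D /= np.linalg.norm(D, axis=0) + 1e-12  # keep atoms unit-norm
    return D, C
```

In the paper this loop operates on deep-learned (DCNN) feature vectors of the gallery images rather than raw pixels, with sub-dictionaries learned per subject.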