Deep transformation learning for face recognition in the unconstrained scene
| Author: | Jinkun Zhang, Yanqing Shao, Guanhao Chen, Zhuoyi Jin, Chaowei Tang |
|---|---|
| Year of publication: | 2018 |
| Subject: | business.industry; Computer science; Pattern recognition; 02 engineering and technology; 010501 environmental sciences; 01 natural sciences; Facial recognition system; Computer Science Applications; Transformation (function); Discriminative model; Hardware and Architecture; Feature (computer vision); Face (geometry); Pattern recognition (psychology); Softmax function; 0202 electrical engineering, electronic engineering, information engineering; 020201 artificial intelligence & image processing; Computer Vision and Pattern Recognition; Artificial intelligence; business; Software; 0105 earth and related environmental sciences |
| Source: | Machine Vision and Applications, 29:513–523 |
| ISSN: | 1432-1769; 0932-8092 |
| Description: | Because human pose variations cannot be controlled in unconstrained scenes, it is often difficult to capture frontal face images; as a result, the face recognition rate is low, or faces cannot be recognized at all. To tackle this problem, this paper proposes deep transformation learning, which extracts pose-robust features within a single model; it consists of a feature transformation and joint supervision by a softmax loss and a pose loss. Specifically, the feature transformation is designed to learn the transformation between different poses. The pose loss is designed to simultaneously learn the feature centers of different poses and preserve intra-pose relationships. The extracted deep features therefore tend to be more pose-robust and discriminative. Experimental results confirm the effectiveness of the approach on several important face recognition benchmarks, including Labeled Faces in the Wild and IARPA Janus Benchmark A. (An illustrative sketch of the joint loss follows this record.) |
| Database: | OpenAIRE |
| External link: | |
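The abstract describes joint supervision by a softmax loss and a pose loss that learns feature centers for different poses. The following is a minimal PyTorch sketch of that kind of objective, assuming a center-loss-style pose loss over discretized pose bins; the class names, the pose binning, and the weighting factor `lam` are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PoseCenterLoss(nn.Module):
    """Center-loss-style pose loss (assumed formulation): keeps one learnable
    feature center per pose bin and pulls each embedding toward the center of
    its pose bin, encouraging pose-robust, tightly clustered features."""

    def __init__(self, num_poses: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_poses, feat_dim))

    def forward(self, features: torch.Tensor, pose_labels: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distance between each feature and its pose center.
        centers = self.centers[pose_labels]              # (batch, feat_dim)
        return ((features - centers) ** 2).sum(dim=1).mean()


class JointLoss(nn.Module):
    """Joint supervision: softmax (cross-entropy) identity loss plus a
    weighted pose loss, as the abstract outlines at a high level."""

    def __init__(self, num_ids: int, num_poses: int, feat_dim: int, lam: float = 0.01):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_ids)   # softmax head over identities
        self.pose_loss = PoseCenterLoss(num_poses, feat_dim)
        self.lam = lam                                   # trade-off weight (assumed value)

    def forward(self, features, id_labels, pose_labels):
        ce = F.cross_entropy(self.classifier(features), id_labels)
        pl = self.pose_loss(features, pose_labels)
        return ce + self.lam * pl


# Usage sketch: `features` would come from a CNN backbone (not shown here).
if __name__ == "__main__":
    feats = torch.randn(8, 256, requires_grad=True)  # batch of 8 embeddings, dim 256
    ids = torch.randint(0, 100, (8,))                # identity labels
    poses = torch.randint(0, 5, (8,))                # discretized pose bins
    criterion = JointLoss(num_ids=100, num_poses=5, feat_dim=256)
    loss = criterion(feats, ids, poses)
    loss.backward()
    print(float(loss))
```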