Author: |
Yu-Xin Yang, Chang Wen, Kai Xie, Fang-Qing Wen, Guan-Qun Sheng, Xin-Gong Tang |
Language: |
English |
Year of publication: |
2018 |
Subject: |
|
Source: |
Sensors, Vol 18, Iss 12, p 4237 (2018) |
Document type: |
article |
ISSN: |
1424-8220 |
DOI: |
10.3390/s18124237 |
Description: |
Face recognition in complex environments is vulnerable to illumination change, object rotation, occlusion, and similar disturbances, which degrade the precision of target localization. To address this, a face recognition algorithm based on multi-feature fusion is proposed. The study presents a new robust face-matching method, SR-CNN, which combines the rotation-invariant texture feature (RITF) vector, the scale-invariant feature transform (SIFT) vector, and a convolutional neural network (CNN). Furthermore, a graphics processing unit (GPU) is used to parallelize the model for optimal computational performance. Experiments were carried out on the Labeled Faces in the Wild (LFW) database and a self-collected face database. On the LFW database, the true positive rate improved by 10.97–13.24% and the acceleration ratio (the ratio of central processing unit (CPU) operation time to GPU time) reached 5–6. On the self-collected database, the true positive rate increased by 12.65–15.31%, and the acceleration ratio improved by a factor of 6–7. |
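The abstract describes fusing three face descriptors (RITF, SIFT, and a CNN embedding) before matching. The following is a minimal illustrative sketch of that fusion idea only: the normalization, equal fusion weights, cosine-similarity matching rule, and all vector dimensions are assumptions for illustration, not the authors' actual SR-CNN pipeline.

```python
# Sketch of multi-feature fusion for face matching (assumptions, not the
# published SR-CNN implementation): each face is represented by an RITF
# vector, a SIFT-based vector, and a CNN embedding, which are L2-normalized,
# concatenated, and compared with cosine similarity.
import numpy as np

def l2_normalize(v, eps=1e-12):
    """Scale a feature vector to unit length so no descriptor dominates."""
    return v / (np.linalg.norm(v) + eps)

def fuse_features(ritf_vec, sift_vec, cnn_vec, weights=(1.0, 1.0, 1.0)):
    """Concatenate the three normalized descriptors into one fused vector."""
    parts = [w * l2_normalize(np.asarray(v, dtype=np.float64))
             for w, v in zip(weights, (ritf_vec, sift_vec, cnn_vec))]
    return l2_normalize(np.concatenate(parts))

def match_score(fused_a, fused_b):
    """Cosine similarity between two fused descriptors (1.0 = identical)."""
    return float(np.dot(fused_a, fused_b))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder vectors standing in for real RITF / SIFT / CNN outputs;
    # the dimensions (64, 128, 512) are arbitrary choices for the demo.
    face_a = fuse_features(rng.normal(size=64), rng.normal(size=128), rng.normal(size=512))
    face_b = fuse_features(rng.normal(size=64), rng.normal(size=128), rng.normal(size=512))
    print(f"similarity: {match_score(face_a, face_b):.3f}")
```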
Database: |
Directory of Open Access Journals |
External link: |
|