Author: |
Cheema, Usman, Ahmad, Mobeen, Han, Dongil, Moon, Seungbin |
Year of publication: |
2021 |
Subject: |
|
Document type: |
Working Paper |
Description: |
Visible-to-thermal face image matching is a challenging variant of cross-modality recognition. The challenge lies in the large modality gap and low correlation between the visible and thermal modalities. Existing approaches employ image preprocessing, feature extraction, or common subspace projection, each of which is an independent problem in itself. In this paper, we propose an end-to-end framework for cross-modal face recognition. The proposed algorithm aims to learn identity-discriminative features from unprocessed facial images and to identify cross-modal image pairs. A novel Unit-Class Loss is proposed to preserve identity information while discarding modality information. In addition, a Cross-Modality Discriminator block is proposed to integrate image-pair classification capability into the network. The proposed network can be used to extract modality-independent vector representations or to classify matching pairs of test images. Our cross-modality face recognition experiments on five independent databases demonstrate that the proposed method achieves marked improvement over existing state-of-the-art methods. |
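The matching-pair use of modality-independent embeddings described in the abstract can be illustrated with a minimal sketch. This is a generic cosine-similarity matcher, not the paper's actual network or its Unit-Class Loss; the function names, embeddings, and threshold are all illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_matching_pair(visible_emb: np.ndarray,
                     thermal_emb: np.ndarray,
                     threshold: float = 0.5) -> bool:
    # Declare a visible/thermal pair a match when their
    # modality-independent embeddings point in a similar direction.
    # The threshold value is a placeholder, not taken from the paper.
    return cosine_similarity(visible_emb, thermal_emb) >= threshold

# Toy embeddings (hypothetical; a real system would use features
# produced by the trained cross-modal network).
visible = np.array([0.9, 0.1, 0.2])
thermal_same_id = np.array([0.85, 0.15, 0.25])   # similar direction
thermal_diff_id = np.array([-0.7, 0.6, 0.1])     # dissimilar direction

print(is_matching_pair(visible, thermal_same_id))
print(is_matching_pair(visible, thermal_diff_id))
```

The same embeddings could instead be fed to a learned classifier head, which is the role the paper's Cross-Modality Discriminator block plays inside the end-to-end network.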
Database: |
arXiv |
External link: |
|