Author: |
Song, Hyoung-Kyu, AlAlkeem, Ebrahim, Yun, Jaewoong, Kim, Tae-Ho, Yoo, Hyerin, Heo, Dasom, Chae, Myungsu, Yeob Yeun, Chan |
Subject: |
|
Source: |
BMC Bioinformatics; 7/16/2020, Vol. 21 Issue 1, p1-11, 11p, 2 Black and White Photographs, 2 Diagrams, 3 Charts, 1 Graph |
Abstract: |
Background: Recognition is an essential human function. Humans easily recognize a person from various inputs such as voice, face, or gesture. In this study, we focus on a deep learning model with multiple modalities, which offers several benefits, including noise reduction. We used ResNet-50 to extract features from a dataset of 2D data. Results: This study proposes a novel multimodal and multitask model that both identifies a person's ID and classifies their gender in a single step. At the feature level, the extracted features are concatenated as the input to the identification module. Additionally, the model design allows the number of modalities used in a single model to be changed. To demonstrate the model, we generated 58 virtual subjects from public ECG, face, and fingerprint datasets. In tests with noisy input, using multiple modalities proved more robust and accurate than using a single modality. Conclusions: This paper presents an end-to-end approach to multimodal and multitask learning. The proposed model is robust against spoof attacks, which can be significant for bio-authentication devices. Based on these results, we suggest a new perspective on the human identification task that performs better than previous approaches. [ABSTRACT FROM AUTHOR] |
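The fusion-then-multitask design the abstract describes (per-modality feature extraction, feature-level concatenation, then separate identification and gender heads) can be sketched as follows. This is a minimal illustration, not the authors' implementation: all names and dimensions are assumptions, and random vectors stand in for ResNet-50 features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed: each modality (ECG, face, fingerprint) yields a 2048-d feature
# vector, matching the output of ResNet-50's global pooling layer.
FEAT_DIM = 2048
N_MODALITIES = 3
N_SUBJECTS = 58   # the study generates 58 virtual subjects
N_GENDERS = 2

def extract_features(raw_inputs):
    """Stand-in for the per-modality ResNet-50 feature extractors."""
    return [rng.standard_normal(FEAT_DIM) for _ in raw_inputs]

def fuse(features):
    """Feature-level fusion: concatenate the per-modality vectors."""
    return np.concatenate(features)

# Two task heads share the fused representation (the multitask part).
W_id = rng.standard_normal((N_SUBJECTS, FEAT_DIM * N_MODALITIES)) * 0.01
W_gender = rng.standard_normal((N_GENDERS, FEAT_DIM * N_MODALITIES)) * 0.01

def forward(raw_inputs):
    """Single step: one forward pass produces both task predictions."""
    fused = fuse(extract_features(raw_inputs))
    id_logits = W_id @ fused
    gender_logits = W_gender @ fused
    return int(id_logits.argmax()), int(gender_logits.argmax())

subject_id, gender = forward(["ecg", "face", "fingerprint"])
```

Because fusion happens by concatenation, dropping or adding a modality only changes the fused vector's length, which is how a single architecture can accommodate a variable number of modalities.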
Database: |
Complementary Index |
External link: |
|