Improving the Performance of Unimodal Dynamic Hand-Gesture Recognition With Multimodal Training
Author: | Vishal M. Patel, Mahdi Abavisani, Hamid Reza Vaezi Joze |
---|---|
Year: | 2019 |
Subject: |
FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV); Computer Science - Machine Learning (cs.LG); Computer Science - Human-Computer Interaction (cs.HC); Statistics - Machine Learning (stat.ML); I.5.3; I.2.10; 68T45; 62H30; convolutional neural network; gesture recognition; modality (human–computer interaction); regularization; artificial intelligence; gesture |
Source: | CVPR |
DOI: | 10.1109/cvpr.2019.00126 |
Description: | We present an efficient approach for leveraging knowledge from multiple modalities when training unimodal 3D convolutional neural networks (3D-CNNs) for dynamic hand gesture recognition. Instead of explicitly combining multimodal information, as is commonplace in many state-of-the-art methods, we propose a different framework in which the knowledge of multiple modalities is embedded in individual networks, so that each unimodal network achieves improved performance. In particular, we dedicate a separate network to each available modality and train the networks to collaborate, developing common semantics and better representations. We introduce a "spatiotemporal semantic alignment" (SSA) loss to align the content of the features from different networks. In addition, we regularize this loss with our proposed "focal regularization parameter" to avoid negative knowledge transfer. Experimental results show that our framework improves the test-time recognition accuracy of unimodal networks and achieves state-of-the-art performance on several dynamic hand gesture recognition datasets. |
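The abstract describes two ingredients: an alignment loss on the spatiotemporal features of per-modality networks, and a focal weight that suppresses the loss when transfer would be harmful. The sketch below illustrates the idea only; the correlation-based distance, the exponential gating rule, and all names (`ssa_loss`, `focal_weight`, `beta`) are assumptions, not the paper's exact formulation.

```python
import numpy as np

def ssa_loss(feat_a, feat_b):
    """Illustrative SSA-style loss: align the channel-correlation structure
    of two feature maps of shape (channels, flattened spatiotemporal positions).
    The paper's actual distance may differ; this is a plausible stand-in."""
    def corr(f):
        f = f - f.mean(axis=1, keepdims=True)
        f = f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-8)
        return f @ f.T  # (channels, channels) correlation matrix
    diff = corr(feat_a) - corr(feat_b)
    return float(np.mean(diff ** 2))

def focal_weight(loss_src, loss_tgt, beta=2.0):
    """Hypothetical focal regularization: shrink the alignment penalty when
    the source network is performing worse than the target, so a weak
    modality does not drag a strong one down (negative transfer)."""
    return float(np.exp(-beta * max(loss_src - loss_tgt, 0.0)))

rng = np.random.default_rng(0)
fa = rng.standard_normal((8, 64))
fb = fa + 0.1 * rng.standard_normal((8, 64))  # features with similar content

# Transfer from a stronger source (lower loss) keeps the full penalty;
# a weaker source would be exponentially down-weighted.
aligned_penalty = focal_weight(0.3, 0.5) * ssa_loss(fa, fb)
```

In training, each unimodal 3D-CNN would minimize its own classification loss plus such a weighted alignment term against the other modalities' networks; at test time each network runs alone on its single modality.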
Database: | OpenAIRE |
External link: |