Multimodal Score Fusion with Sparse Low-rank Bilinear Pooling for Egocentric Hand Action Recognition.

Author: Roy, Kankana
Source: ACM Transactions on Multimedia Computing, Communications & Applications; Jul 2024, Vol. 20, Issue 7, pp. 1-22
Abstract: With the advent of egocentric cameras come new challenges that traditional computer vision methods cannot adequately handle. Moreover, egocentric cameras often offer multiple modalities that need to be modeled jointly to exploit complementary information. In this article, we propose a sparse low-rank bilinear score pooling approach for egocentric hand action recognition from RGB-D videos. It consists of five blocks: a baseline CNN that encodes RGB and depth information to produce classification probabilities; a novel bilinear score pooling block that generates a score matrix; a sparse low-rank matrix recovery block that reduces the redundant features common in bilinear pooling; a one-layer CNN for frame-level classification; and an RNN for video-level classification. Rather than fusing traditional CNN features, we fuse the classification probabilities of the RGB and depth modalities through a simple yet effective sparse low-rank bilinear score pooling that produces a fused RGB-D score matrix. To demonstrate the efficacy of our method, we perform extensive experiments on two large-scale hand action datasets, THU-READ and FPHA, and two smaller datasets, GUN-71 and HAD. The proposed method outperforms state-of-the-art methods, achieving accuracies of 78.55% and 96.87% on the THU-READ dataset in the cross-subject and cross-group settings, respectively, as well as 91.59% on FPHA and 43.87% on GUN-71. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
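
The fusion step described in the abstract can be illustrated with a minimal sketch: the per-frame class-probability vectors from the RGB and depth streams are combined by an outer product into a bilinear score matrix, which is then decomposed into a low-rank part and a sparse residual. The solver below (alternating singular-value thresholding and soft thresholding, an RPCA-style heuristic), the regularization values, and the class count are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def bilinear_score_pooling(p_rgb, p_depth):
    """Outer product of the per-frame class-probability vectors from the two
    modalities, giving a C x C fused score matrix (C = number of classes)."""
    return np.outer(p_rgb, p_depth)

def sparse_low_rank_recovery(S, lam=0.1, mu=0.05, n_iter=50):
    """Split the score matrix S into a low-rank part L and a sparse part E by
    alternating singular-value thresholding and soft thresholding.
    This is a generic RPCA-style decomposition, assumed here for illustration."""
    L = np.zeros_like(S)
    E = np.zeros_like(S)
    for _ in range(n_iter):
        # Low-rank update: shrink the singular values of (S - E).
        U, sv, Vt = np.linalg.svd(S - E, full_matrices=False)
        L = U @ np.diag(np.maximum(sv - mu, 0.0)) @ Vt
        # Sparse update: soft-threshold the residual.
        R = S - L
        E = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, E

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C = 45                               # hypothetical number of action classes
    p_rgb = rng.dirichlet(np.ones(C))    # stand-in for the RGB stream's softmax output
    p_depth = rng.dirichlet(np.ones(C))  # stand-in for the depth stream's softmax output
    S = bilinear_score_pooling(p_rgb, p_depth)
    L, E = sparse_low_rank_recovery(S)
    print(S.shape, np.linalg.matrix_rank(L, tol=1e-6), np.count_nonzero(E))
```

In the paper's pipeline, a matrix such as L (or the recovered score structure) would then feed the one-layer CNN for frame-level classification and the RNN for video-level classification; those stages are not sketched here.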