Action Recognition with Fusion of Multiple Graph Convolutional Networks
Author: | Maurice, Camille, Lerasle, Frédéric |
---|---|
Contributors: | Équipe Robotique, Action et Perception (LAAS-RAP), Laboratoire d'analyse et d'architecture des systèmes (LAAS), Université Toulouse Capitole (UT Capitole), Université de Toulouse (UT)-Université de Toulouse (UT)-Institut National des Sciences Appliquées - Toulouse (INSA Toulouse), Institut National des Sciences Appliquées (INSA)-Université de Toulouse (UT)-Institut National des Sciences Appliquées (INSA)-Université Toulouse - Jean Jaurès (UT2J), Université de Toulouse (UT)-Université Toulouse III - Paul Sabatier (UT3), Université de Toulouse (UT)-Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique (Toulouse) (Toulouse INP), Université de Toulouse (UT)-Université Toulouse Capitole (UT Capitole), Université de Toulouse (UT) |
Year of publication: | 2021 |
Subject: | |
Source: | 2021 17th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Nov 2021, Washington, DC, United States. ⟨10.1109/AVSS52988.2021.9663765⟩ |
DOI: | 10.1109/avss52988.2021.9663765 |
Description: | International audience; We propose two lightweight and specialized Spatio-Temporal Graph Convolutional Networks (ST-GCNs): one for actions characterized by the motion of the human body, and a novel one designed specifically to recognize particular object configurations during the execution of human actions. We propose a late-fusion strategy over the predictions of both graph networks to get the most out of the two and to resolve ambiguities in the action classification. This modular approach reduces memory cost and training times. Moreover, we use the same late-fusion mechanism with a Bayesian approach to further improve performance. We show results on two public datasets: CAD-120 and Watch-n-Patch. Our late-fusion mechanism yields accuracy gains of +21 percentage points (pp) on Watch-n-Patch and +7 pp on CAD-120 compared to the individual graphs. Our approach outperforms most significant existing approaches. |
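The abstract describes fusing the per-class predictions of a body-motion ST-GCN and an object-configuration ST-GCN at the decision level. A minimal sketch of such a late-fusion step is shown below; the weighted-average scheme, the function name `late_fusion`, and the fusion weight `w` are illustrative assumptions, not the authors' exact method.

```python
def late_fusion(p_body, p_object, w=0.5):
    """Fuse per-class probability vectors from two classifiers.

    p_body: probabilities from the body-motion graph network (assumed name).
    p_object: probabilities from the object-configuration graph network.
    w: fusion weight given to the body-motion predictions (assumption).
    Returns the index of the class with the highest fused score.
    """
    fused = [w * b + (1.0 - w) * o for b, o in zip(p_body, p_object)]
    return max(range(len(fused)), key=fused.__getitem__)


# Example: the body graph alone is ambiguous between classes 0 and 1;
# the object-configuration graph resolves the ambiguity toward class 1.
p_body = [0.45, 0.45, 0.10]
p_object = [0.20, 0.70, 0.10]
print(late_fusion(p_body, p_object))  # → 1
```

In this toy example the fused scores are [0.325, 0.575, 0.10], so the ambiguity present in the body-motion predictions is cleared by the second graph, which is the behavior the abstract attributes to the fusion strategy.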
Database: | OpenAIRE |
External link: |