HARNet in deep learning approach—a systematic survey

Author: Neelam Sanjeev Kumar, G. Deepika, V. Goutham, B. Buvaneswari, R. Vijaya Kumar Reddy, Sanjeevkumar Angadi, C. Dhanamjayulu, Ravikumar Chinthaginjala, Faruq Mohammad, Baseem Khan
Language: English
Year of publication: 2024
Subject:
Source: Scientific Reports, Vol 14, Iss 1, Pp 1-15 (2024)
Document type: article
ISSN: 2045-2322
DOI: 10.1038/s41598-024-58074-y
Description: Abstract This article presents a comprehensive examination of human action recognition (HAR) methodologies at the convergence of deep learning and computer vision. We trace the progression from handcrafted feature-based approaches to end-to-end learning, with particular attention to the role of large-scale datasets. By classifying research paradigms, such as spatial feature extraction and temporal modelling, our proposed taxonomy illuminates the merits and drawbacks of each. We also present HARNet, a multi-model deep learning architecture that integrates recurrent and convolutional neural networks and uses attention mechanisms to improve accuracy and robustness (a minimal illustrative sketch of this pattern follows the record below). The VideoMAE v2 method ( https://github.com/OpenGVLab/VideoMAEv2 ) is used as a case study to illustrate practical implementations and obstacles. For researchers and practitioners seeking a comprehensive understanding of recent advances in HAR at the intersection of computer vision and deep learning, this survey is an invaluable resource.
Database: Directory of Open Access Journals
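The abstract describes HARNet as combining convolutional feature extraction, recurrent temporal modelling, and attention for human action recognition. The following is a minimal, hypothetical PyTorch sketch of that general CNN + RNN + attention pattern, not the authors' actual HARNet implementation; the layer sizes, the GRU and additive-attention choices, and the class name CnnRnnAttentionHAR are illustrative assumptions.

# Minimal sketch (assumption, not the published HARNet): per-frame CNN features,
# a GRU over time, attention pooling across time steps, then a classifier.
import torch
import torch.nn as nn


class CnnRnnAttentionHAR(nn.Module):
    """Illustrative CNN -> RNN -> attention pipeline for video action recognition."""

    def __init__(self, num_classes: int = 10, feat_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        # Tiny per-frame CNN; a real system would use a pretrained backbone.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Recurrent temporal model over the sequence of frame features.
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Additive attention that scores each time step before pooling.
        self.attn = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, time, channels, height, width)
        b, t, c, h, w = video.shape
        frames = video.view(b * t, c, h, w)
        feats = self.cnn(frames).view(b, t, -1)            # (b, t, feat_dim)
        hidden, _ = self.rnn(feats)                        # (b, t, hidden_dim)
        weights = torch.softmax(self.attn(hidden), dim=1)  # (b, t, 1)
        pooled = (weights * hidden).sum(dim=1)             # (b, hidden_dim)
        return self.classifier(pooled)                     # (b, num_classes)


if __name__ == "__main__":
    model = CnnRnnAttentionHAR(num_classes=10)
    clip = torch.randn(2, 16, 3, 112, 112)  # 2 clips of 16 RGB frames each
    print(model(clip).shape)  # torch.Size([2, 10])

The attention weights let the model emphasize informative frames before classification; in practice the toy CNN above would be replaced by a pretrained image or video backbone.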