Optimizing Spatiotemporal Feature Learning in 3D Convolutional Neural Networks With Pooling Blocks
Author: | Hyun Kwang Shin, Muhammad Rafiq, Rockson Agyeman, Bernhard Rinner, Gyu Sang Choi |
---|---|
Year of publication: | 2021 |
Subject: | General Computer Science; General Engineering; General Materials Science; feature extraction; pooling; convolutional neural network; 3D CNN; action recognition; kernel (linear algebra); feature learning; image classification; pattern recognition; benchmark (computing); optimization; artificial intelligence |
Source: | IEEE Access, Vol 9, Pp 70797-70805 (2021) |
ISSN: | 2169-3536 |
DOI: | 10.1109/access.2021.3078295 |
Description: | Image data contain spatial information only, making two-dimensional (2D) Convolutional Neural Networks (CNNs) ideal for solving image classification problems. Video data, on the other hand, contain both spatial and temporal information that must be analyzed simultaneously to solve action recognition problems. 3D CNNs are successfully used for these tasks, but they suffer from their extensive inherent parameter set. Increasing the network’s depth, as is common among 2D CNNs, and hence increasing the number of trainable parameters, does not provide a good trade-off between accuracy and complexity for a 3D CNN. In this work, we propose the Pooling Block (PB) as an enhanced pooling operation for optimizing action recognition with 3D CNNs. A PB comprises three kernels of different sizes. The three kernels simultaneously sub-sample feature maps, and their outputs are concatenated into a single output vector. We compare our approach with three benchmark 3D CNNs (C3D, I3D, and Asymmetric 3D CNN) on three datasets (HMDB51, UCF101, and Kinetics 400). Our PB method yields a significant improvement in 3D CNN performance with a comparatively small increase in the number of trainable parameters. Using C3D as the benchmark, we further investigate (1) the effect of video frame dimension and (2) the effect of the number of video frames on 3D CNN performance. |
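The abstract describes the Pooling Block as several kernels of different sizes sub-sampling the same feature map in parallel, with the outputs concatenated into one vector. A minimal NumPy sketch of that idea is below; the function names, the choice of max pooling, and the kernel sizes are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def max_pool3d(x, k):
    # Non-overlapping 3D max pooling with a cubic kernel of size k.
    # Assumption for this sketch: each dimension of x is divisible by k.
    d, h, w = x.shape
    blocks = x.reshape(d // k, k, h // k, k, w // k, k)
    return blocks.max(axis=(1, 3, 5))

def pooling_block(x, kernel_sizes=(1, 2, 4)):
    # Illustrative Pooling Block: sub-sample the same feature map with
    # several kernel sizes in parallel and concatenate the flattened
    # outputs into a single vector (as the abstract describes).
    return np.concatenate([max_pool3d(x, k).ravel() for k in kernel_sizes])

# Example: a 4x4x4 feature map pooled with kernels of size 2 and 4
# yields 2*2*2 + 1*1*1 = 9 values in the concatenated vector.
x = np.arange(64, dtype=float).reshape(4, 4, 4)
v = pooling_block(x, kernel_sizes=(2, 4))
```

In a real 3D CNN the block would operate per channel on (C, T, H, W) tensors and the concatenation would happen along the channel axis, but the parallel multi-kernel sub-sampling shown here is the core mechanism the abstract names.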
Database: | OpenAIRE |
External link: |