Video Transformers: A Survey
Author: Javier Selva, Anders S. Johansen, Sergio Escalera, Kamal Nasrollahi, Thomas B. Moeslund, Albert Clapes
Year of publication: 2022
Subject: self-attention; transformers; video representations; computer vision; Tokenization; Data models; Task analysis; Training; Visualization; Software; Applied Mathematics; Computational Theory and Mathematics; Artificial Intelligence; Current transformers; Market research; FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV)
Source: Selva, J, Johansen, A S, Escalera, S, Nasrollahi, K, Moeslund, T B & Clapes, A 2023, 'Video Transformers: A Survey', IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-20. https://doi.org/10.1109/TPAMI.2023.3243465
DOI: 10.48550/arxiv.2201.05991
Description: Transformer models have shown great success in handling long-range interactions, making them a promising tool for modeling video. However, they lack inductive biases and scale quadratically with input length. These limitations are further exacerbated when dealing with the high dimensionality introduced by the temporal dimension. While there are surveys analyzing the advances of Transformers for vision, none focus on an in-depth analysis of video-specific designs. In this survey, we analyze the main contributions and trends of works leveraging Transformers to model video. Specifically, we first delve into how videos are handled at the input level. Then, we study the architectural changes made to deal with video more efficiently, reduce redundancy, re-introduce useful inductive biases, and capture long-term temporal dynamics. In addition, we provide an overview of different training regimes and explore effective self-supervised learning strategies for video. Finally, we conduct a performance comparison on the most common benchmark for Video Transformers (i.e., action classification), finding them to outperform 3D ConvNets even with lower computational complexity.
Database: OpenAIRE
External link:
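
The description above touches on two recurring technical points in Video Transformers: a clip is first tokenized (e.g., into spatio-temporal patches), and self-attention over the resulting tokens scales quadratically with their number. The sketch below is a minimal, hypothetical PyTorch illustration of that cost; the tensor shapes, layer choices, and names such as `tubelet_embed` are assumptions for illustration, not the survey's or any particular model's implementation.

```python
# Minimal sketch (assumption: PyTorch; illustrative only). A clip is split into
# non-overlapping spatio-temporal "tubelet" tokens, and a single self-attention
# layer then compares every token with every other one, i.e. O(N^2) in tokens.

import torch
import torch.nn as nn

B, C, T, H, W = 1, 3, 16, 224, 224          # batch, channels, frames, height, width
t, p, d = 2, 16, 768                        # tubelet depth, patch size, embedding dim

# Tubelet embedding: a strided 3D convolution maps each tubelet to one token vector.
tubelet_embed = nn.Conv3d(C, d, kernel_size=(t, p, p), stride=(t, p, p))

video = torch.randn(B, C, T, H, W)
tokens = tubelet_embed(video)               # (B, d, T/t, H/p, W/p)
tokens = tokens.flatten(2).transpose(1, 2)  # (B, N, d) with N = (T/t)*(H/p)*(W/p)

N = tokens.shape[1]                         # 8 * 14 * 14 = 1568 tokens for this clip
print(f"{N} tokens -> attention matrix with {N * N:,} entries per head")

# Full self-attention over all tokens: memory and compute grow quadratically in N.
attn = nn.MultiheadAttention(embed_dim=d, num_heads=12, batch_first=True)
out, _ = attn(tokens, tokens, tokens)       # (B, N, d)
```

Doubling the number of frames doubles N and roughly quadruples the attention cost, which is the efficiency pressure behind the architectural changes the survey reviews.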