Video Understanding with Large Language Models: A Survey

Authors: Tang, Yunlong, Bi, Jing, Xu, Siting, Song, Luchuan, Liang, Susan, Wang, Teng, Zhang, Daoan, An, Jie, Lin, Jingyang, Zhu, Rongyi, Vosoughi, Ali, Huang, Chao, Zhang, Zeliang, Liu, Pinxin, Feng, Mingqian, Zheng, Feng, Zhang, Jianguo, Luo, Ping, Luo, Jiebo, Xu, Chenliang
Year of publication: 2023
Document type: Working Paper
Description: With the burgeoning growth of online video platforms and the escalating volume of video content, the demand for proficient video understanding tools has intensified markedly. Given the remarkable capabilities of large language models (LLMs) in language and multimodal tasks, this survey provides a detailed overview of recent advancements in video understanding that harness the power of LLMs (Vid-LLMs). The emergent capabilities of Vid-LLMs are surprisingly advanced, particularly their ability for open-ended multi-granularity (general, temporal, and spatiotemporal) reasoning combined with commonsense knowledge, suggesting a promising path for future video understanding. We examine the unique characteristics and capabilities of Vid-LLMs, categorizing the approaches into three main types: Video Analyzer x LLM, Video Embedder x LLM, and (Analyzer + Embedder) x LLM. We further identify five sub-types based on the functions of LLMs in Vid-LLMs: LLM as Summarizer, LLM as Manager, LLM as Text Decoder, LLM as Regressor, and LLM as Hidden Layer. In addition, this survey presents a comprehensive study of the tasks, datasets, benchmarks, and evaluation methodologies for Vid-LLMs, and explores the expansive applications of Vid-LLMs across various domains, highlighting their remarkable scalability and versatility in real-world video understanding challenges. Finally, it summarizes the limitations of existing Vid-LLMs and outlines directions for future research. For more information, readers are encouraged to visit the repository at https://github.com/yunlong10/Awesome-LLMs-for-Video-Understanding.
Database: arXiv