DirectorLLM for Human-Centric Video Generation

Authors: Song, Kunpeng; Hou, Tingbo; He, Zecheng; Ma, Haoyu; Wang, Jialiang; Sinha, Animesh; Tsai, Sam; Luo, Yaqiao; Dai, Xiaoliang; Chen, Li; Xia, Xide; Zhang, Peizhao; Vajda, Peter; Elgammal, Ahmed; Juefei-Xu, Felix
Year of publication: 2024
Subject:
Document type: Working Paper
Description: In this paper, we introduce DirectorLLM, a novel video generation model that employs a large language model (LLM) to orchestrate human poses within videos. As foundational text-to-video models rapidly evolve, the demand for high-quality human motion and interaction grows. To address this need and enhance the authenticity of human motions, we extend the LLM from a text generator to a video director and human motion simulator. Utilizing open-source resources from Llama 3, we train DirectorLLM to generate detailed instructional signals, such as human poses, to guide video generation. This approach offloads the simulation of human motion from the video generator to the LLM, effectively creating informative outlines for human-centric scenes. These signals are used as conditions by the video renderer, enabling more realistic and prompt-faithful video generation. As an independent LLM module, it can be applied to different video renderers, including UNet- and DiT-based ones, with minimal effort. Experiments on automatic evaluation benchmarks and human evaluations show that our model outperforms existing ones in generating videos with higher human motion fidelity, improved prompt faithfulness, and enhanced rendered subject naturalness.
Database: arXiv
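
To make the two-stage design described in the abstract concrete, the following minimal Python sketch mimics the control flow only: an LLM "director" produces a per-frame human-pose track from the text prompt, and a separate video renderer consumes that track, together with the prompt, as a conditioning signal. All class and function names, the 17-keypoint pose format, and the stub outputs are illustrative assumptions for this sketch, not the paper's actual interfaces or model code.

```python
from dataclasses import dataclass
from typing import List, Sequence, Tuple

@dataclass
class PoseFrame:
    """Hypothetical pose representation: one set of 2D keypoints per frame."""
    keypoints: List[Tuple[float, float]]

class DirectorLLMStub:
    """Stand-in for the pose-generating LLM (the 'director').

    Per the abstract, a fine-tuned Llama 3 model produces detailed
    instructional signals such as human poses from the text prompt;
    here we emit a fixed dummy sequence so the pipeline is runnable.
    """
    def plan_motion(self, prompt: str, num_frames: int) -> List[PoseFrame]:
        # Placeholder: a real model would decode pose signals conditioned on `prompt`.
        return [PoseFrame(keypoints=[(0.5, 0.5)] * 17) for _ in range(num_frames)]

class VideoRendererStub:
    """Stand-in for the conditional video renderer (UNet- or DiT-based).

    It treats the pose track as an extra conditioning signal alongside
    the text prompt, mirroring the decoupling described in the abstract.
    """
    def render(self, prompt: str, pose_track: Sequence[PoseFrame]) -> List[str]:
        # Placeholder: a real renderer would run guided diffusion to produce frames.
        return [
            f"frame {i}: rendered '{prompt}' conditioned on {len(p.keypoints)} keypoints"
            for i, p in enumerate(pose_track)
        ]

def generate_video(prompt: str, num_frames: int = 16) -> List[str]:
    # Stage 1: the LLM director simulates the human-motion outline.
    poses = DirectorLLMStub().plan_motion(prompt, num_frames)
    # Stage 2: the renderer turns text + pose conditions into frames.
    return VideoRendererStub().render(prompt, poses)

if __name__ == "__main__":
    for line in generate_video("a person waves and walks toward the camera", num_frames=4):
        print(line)
```

Because the director and the renderer only communicate through the pose track, either a UNet- or a DiT-based renderer could sit behind the same interface, which is the modularity the abstract claims for the independent LLM module.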