MEVG: Multi-event Video Generation with Text-to-Video Models

Author: Oh, Gyeongrok; Jeong, Jaehwan; Kim, Sieun; Byeon, Wonmin; Kim, Jinkyu; Kim, Sungwoong; Kim, Sangpil
Publication Year: 2023
Document Type: Working Paper
Description: We introduce a novel diffusion-based video generation method that produces a video depicting multiple events from multiple individual sentences supplied by the user. Our method does not require a large-scale video dataset, since it uses a pre-trained diffusion-based text-to-video generative model without any fine-tuning. Specifically, we propose a last frame-aware diffusion process that preserves visual coherence between consecutive videos, where each video depicts a different event, by initializing the latent while simultaneously adjusting its noise to enhance the motion dynamics of the generated video. Furthermore, we find that iteratively updating the latent vectors with reference to all preceding frames maintains a consistent global appearance across the frames of a video clip. To handle dynamic text input for video generation, we employ a novel prompt generator that transforms coarse text descriptions from the user into multiple optimal prompts for the text-to-video diffusion model. Extensive experiments and user studies show that our proposed method is superior to other video generation models in terms of the temporal coherence of content and semantics. Video examples are available on our project page: https://kuai-lab.github.io/eccv2024mevg.
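The abstract describes last frame-aware initialization only at a high level. The sketch below is a hypothetical illustration of the general idea, not the authors' released code: it assumes clip latents shaped (frames, channels, height, width) and uses a simple variance-preserving blend of the previous clip's final-frame latent with fresh Gaussian noise; the function name and the `noise_scale` parameter are invented for this example.

```python
import torch

def init_next_clip_latent(prev_clip_latent: torch.Tensor,
                          num_frames: int,
                          noise_scale: float = 0.5) -> torch.Tensor:
    """Hypothetical sketch: initialize the latent of the next event's clip
    from the last-frame latent of the previous clip, blended with noise.

    prev_clip_latent: (frames, channels, height, width) latent of the
    previous clip at the start of the reverse diffusion process.
    noise_scale: in [0, 1]; higher values admit more new motion,
    lower values preserve more of the previous clip's appearance.
    """
    # Broadcast the last-frame latent across all frames of the new clip,
    # so denoising starts from the previous clip's final appearance.
    last_frame = prev_clip_latent[-1:].expand(num_frames, -1, -1, -1)

    # Inject per-frame Gaussian noise so the denoiser can introduce new
    # motion instead of reproducing a static scene. The square-root
    # weights keep the blended latent's variance roughly unchanged.
    noise = torch.randn_like(last_frame)
    return ((1.0 - noise_scale) ** 0.5) * last_frame \
        + (noise_scale ** 0.5) * noise
```

Under this assumption, each newly generated clip begins from a latent that is visually anchored to the end of the previous clip, which is one plausible way to obtain the cross-clip coherence the abstract claims.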
Comment: Accepted by ECCV 2024
Database: arXiv