Continuous Video Process: Modeling Videos as Continuous Multi-Dimensional Processes for Video Prediction
Author: | Shrivastava, Gaurav; Shrivastava, Abhinav |
---|---|
Year of publication: | 2024 |
Subject: | |
Document type: | Working Paper |
Description: | Diffusion models have made significant strides in image generation, mastering tasks such as unconditional image synthesis, text-to-image translation, and image-to-image conversion. However, their capabilities fall short in the realm of video prediction, mainly because they treat videos as collections of independent images, relying on external constraints such as temporal attention mechanisms to enforce temporal coherence. In our paper, we introduce a novel model class that treats video as a continuous multi-dimensional process rather than a series of discrete frames. We also report a 75% reduction in the sampling steps required to sample a new frame, making our framework more efficient at inference time. Through extensive experimentation, we establish state-of-the-art performance in video prediction, validated on benchmark datasets including KTH, BAIR, Human3.6M, and UCF101. Navigate to the project page https://www.cs.umd.edu/~gauravsh/cvp/supp/website.html for video results. Comment: Extended version of the published CVPR paper |
Database: | arXiv |
External link: |
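
To make the abstract's core idea more concrete, here is a minimal, hypothetical sketch (not the paper's actual model or code): it treats the transition from the previous frame to the next frame as a continuous process integrated with a small number of solver steps, loosely illustrating how per-frame sampling could use far fewer steps than a full diffusion chain. The `ToyDrift` network, the Euler solver, and the step count are assumptions made purely for illustration.

```python
# Hypothetical sketch only: a toy "continuous frame transition" sampler.
# It is NOT the paper's implementation; it just illustrates integrating a
# learned drift between consecutive frames with few steps.
import torch
import torch.nn as nn


class ToyDrift(nn.Module):
    """Stand-in for a learned drift/denoising network (assumed component)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        # Condition on the interpolation time s in [0, 1] via a broadcast channel.
        s_map = s.view(-1, 1, 1, 1).expand(x.shape[0], 1, *x.shape[2:])
        return self.net(torch.cat([x, s_map], dim=1))


@torch.no_grad()
def predict_next_frame(drift: nn.Module,
                       prev_frame: torch.Tensor,
                       num_steps: int = 10) -> torch.Tensor:
    """Euler-integrate a continuous transition starting from the previous frame.

    Using a small num_steps (e.g. 10) instead of a long per-frame diffusion
    chain mirrors the kind of step reduction the abstract reports; the actual
    schedule and solver used in the paper may differ.
    """
    x = prev_frame.clone()
    dt = 1.0 / num_steps
    for i in range(num_steps):
        s = torch.full((x.shape[0],), i * dt, device=x.device)
        x = x + dt * drift(x, s)  # simple Euler step along the process
    return x


if __name__ == "__main__":
    drift = ToyDrift()
    frame = torch.rand(1, 3, 64, 64)   # dummy previous frame
    nxt = predict_next_frame(drift, frame)
    print(nxt.shape)                   # torch.Size([1, 3, 64, 64])
```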