MVDream: Multi-view Diffusion for 3D Generation
Author: Shi, Yichun; Wang, Peng; Ye, Jianglong; Long, Mai; Li, Kejie; Yang, Xiao
Publication Year: 2023
Subject:
Document Type: Working Paper
Description: We introduce MVDream, a diffusion model that generates consistent multi-view images from a given text prompt. Learning from both 2D and 3D data, a multi-view diffusion model can achieve the generalizability of 2D diffusion models and the consistency of 3D renderings. We demonstrate that such a multi-view diffusion model is implicitly a generalizable 3D prior, agnostic to the choice of 3D representation. It can be applied to 3D generation via Score Distillation Sampling (sketched after this record), significantly improving the consistency and stability of existing 2D-lifting methods. It can also learn new concepts from a few 2D examples, akin to DreamBooth, but for 3D generation. Comment: Reorganized for arXiv; our project page is https://MV-Dream.github.io
Database: arXiv
External Link:
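
The Score Distillation Sampling mentioned in the description is the 2D-lifting objective introduced in DreamFusion, which MVDream reuses with its multi-view model as the prior. As a minimal sketch (not quoted from this record), assume a differentiable renderer $g$ that produces an image $x = g(\theta, c)$ of the 3D representation $\theta$ from camera $c$, and let $\hat{\epsilon}_\phi$ denote the diffusion model's noise prediction conditioned on the text prompt $y$ (and, in the multi-view setting, the camera). The standard SDS gradient is then

$$
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta)
= \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y, c, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta}\,\right],
\qquad x_t = \alpha_t\, x + \sigma_t\, \epsilon ,
$$

where $w(t)$ is a timestep-dependent weight and $\epsilon \sim \mathcal{N}(0, I)$. Intuitively, each rendered view is nudged toward images the diffusion model deems likely for the prompt; replacing a single-view 2D prior with a multi-view diffusion model is what the abstract credits for the improved cross-view consistency.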