ZeroForge: Feedforward Text-to-Shape Without 3D Supervision
Author: Marshall, Kelly O., Pham, Minh, Joshi, Ameya, Jignasu, Anushrut, Balu, Aditya, Krishnamurthy, Adarsh, Hegde, Chinmay
Year of publication: 2023
Subject:
Document type: Working Paper
Description: Current state-of-the-art methods for text-to-shape generation either require supervised training on a labeled dataset of pre-defined 3D shapes, or perform expensive inference-time optimization of implicit neural representations. In this work, we present ZeroForge, an approach for zero-shot text-to-shape generation that avoids both pitfalls. To achieve open-vocabulary shape generation, we require careful architectural adaptation of existing feed-forward approaches, as well as a combination of a data-free CLIP loss and contrastive losses to avoid mode collapse. Using these techniques, we are able to considerably expand the generative ability of existing feed-forward text-to-shape models such as CLIP-Forge. We support our method via extensive qualitative and quantitative evaluations. (See the loss sketch after this record.)
Comment: 19 pages; high-resolution figures needed to demonstrate 3D results
Database: arXiv
External link:
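
The description couples a data-free CLIP loss with a contrastive term to keep different prompts from collapsing onto the same shape. Below is a minimal, hypothetical PyTorch sketch of that combination, not ZeroForge's actual implementation: the function names (`clip_alignment_loss`, `contrastive_loss`), the multi-view embedding shapes, the temperature, and the 0.1 weighting are all illustrative assumptions, and random tensors stand in for CLIP embeddings of differentiably rendered shapes.

```python
# Illustrative sketch only: a CLIP-similarity loss plus an InfoNCE-style
# contrastive term, the loss combination the abstract describes. All names
# and hyperparameters here are assumptions, not ZeroForge's API.
import torch
import torch.nn.functional as F


def clip_alignment_loss(image_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    """Pull rendered-shape embeddings toward their prompt embeddings.

    image_emb: (B, V, D) CLIP embeddings of V rendered views per prompt.
    text_emb:  (B, D) CLIP embeddings of the B text prompts.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Cosine similarity of every view against its own prompt, averaged;
    # negated so that minimizing the loss maximizes similarity.
    sim = (image_emb * text_emb.unsqueeze(1)).sum(dim=-1)  # (B, V)
    return -sim.mean()


def contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style term: each prompt's mean view embedding must match its
    own text embedding better than the other prompts' embeddings in the
    batch, discouraging all prompts from producing one identical shape."""
    img = F.normalize(image_emb.mean(dim=1), dim=-1)  # (B, D)
    txt = F.normalize(text_emb, dim=-1)               # (B, D)
    logits = img @ txt.t() / temperature              # (B, B)
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)


# Toy usage: random embeddings stand in for CLIP outputs of differentiably
# rendered shapes (4 prompts, 3 views each, embedding dimension 512).
if __name__ == "__main__":
    img = torch.randn(4, 3, 512, requires_grad=True)
    txt = torch.randn(4, 512)
    loss = clip_alignment_loss(img, txt) + 0.1 * contrastive_loss(img, txt)
    loss.backward()
    print(float(loss))
```

The contrastive term matters because the alignment loss alone can be satisfied by a single shape that scores moderately well against every prompt; penalizing cross-prompt matches in the batch is one way to rule that degenerate solution out.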