https://nitter.net/camenduru/status/1731175297339277763
Thanks to Baorui Ma ❤ Haoge Deng ❤ Junsheng Zhou ❤ Yu-Shen Liu ❤ Tiejun Huang ❤ Xinlong Wang ❤
🌐page: https://mabaorui.github.io/GeoDream_page
📄paper: https://arxiv.org/abs/2311.17971
🧬code: https://github.com/baaivision/GeoDream
🦒colab by modelslab.com: please try it 🐣 https://github.com/camenduru/GeoDream-colab
Abstract
Text-to-3D generation by distilling pretrained large-scale text-to-image diffusion models has shown great promise but still suffers from inconsistent 3D geometric structures (the Janus problem) and severe artifacts. These problems mainly stem from 2D diffusion models lacking 3D awareness during lifting. In this work, we present GeoDream, a novel method that incorporates explicit generalized 3D priors with 2D diffusion priors to obtain unambiguous, 3D-consistent geometric structures without sacrificing diversity or fidelity. Specifically, we first utilize a multi-view diffusion model to generate posed images and then construct a cost volume from the predicted images, which serves as a native 3D geometric prior, ensuring spatial consistency in 3D space. Subsequently, we propose to harness the 3D geometric prior to unlock the great potential of 3D awareness in 2D diffusion priors via a disentangled design. Notably, disentangling the 2D and 3D priors allows the 3D geometric prior to be refined further. We show that the refined 3D geometric prior aids the 3D-aware capability of the 2D diffusion priors, which in turn provides superior guidance for further refining the 3D geometric prior. Our numerical and visual comparisons demonstrate that GeoDream generates more 3D-consistent textured meshes with high-resolution realistic renderings (i.e., 1024 × 1024) and adheres more closely to semantic coherence.
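To make the "cost volume as a native 3D geometric prior" idea concrete, here is a minimal, hypothetical sketch of how multi-view consistency can be aggregated into a voxel cost volume from posed images. All function names, shapes, and the variance-based cost are illustrative assumptions for exposition, not GeoDream's actual API or implementation; see the linked repo for the real code.

```python
# Hypothetical sketch: lift per-view features from posed images into a 3D cost
# volume by measuring cross-view feature agreement (low variance across views
# suggests multi-view-consistent geometry). Shapes and names are assumptions.
import torch
import torch.nn.functional as F

def build_cost_volume(feats, projections, volume_res=32, bound=1.0):
    """
    feats:       (V, C, H, W) feature maps, one per posed view
    projections: (V, 3, 4) camera projection matrices (world -> pixel)
    Returns a (C, R, R, R) cost volume of per-voxel feature variance.
    """
    V, C, H, W = feats.shape
    R = volume_res

    # Regular voxel grid in world space, flattened to (R^3, 3)
    axis = torch.linspace(-bound, bound, R)
    grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
    pts = grid.reshape(-1, 3)
    pts_h = torch.cat([pts, torch.ones(len(pts), 1)], dim=1)  # homogeneous

    samples = []
    for v in range(V):
        # Project every voxel center into view v's image plane
        uvw = (projections[v] @ pts_h.T).T                # (R^3, 3)
        uv = uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)      # perspective divide
        # Normalize pixel coords to [-1, 1] for grid_sample
        uv = uv / torch.tensor([W - 1.0, H - 1.0]) * 2 - 1
        sampled = F.grid_sample(
            feats[v : v + 1],            # (1, C, H, W)
            uv.view(1, 1, -1, 2),        # (1, 1, R^3, 2)
            align_corners=True,
        ).view(C, -1)                    # (C, R^3)
        samples.append(sampled)

    stack = torch.stack(samples)         # (V, C, R^3)
    cost = stack.var(dim=0)              # variance across views per voxel
    return cost.view(C, R, R, R)
```

In the pipeline the abstract describes, a volume like this would then serve as the explicit 3D prior that is refined jointly with the disentangled 2D diffusion guidance; the sketch above only illustrates the multi-view consistency signal such a volume captures.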