# DreamCraft3D

DreamCraft3D is a text- and image-to-3D model built on the threestudio 3D framework. It runs a 4-stage pipeline guided by Stable Zero123 and DeepFloyd IF.
- Original model: https://github.com/deepseek-ai/DreamCraft3D
- Threestudio adaptation: https://github.com/DSaurus/threestudio-dreamcraft3D
- Threestudio: https://github.com/threestudio-project/threestudio/
- Cog implementation: https://github.com/datakami-models/cog-dreamcraft3d/
## Pipeline
- Step 1: NeRF (coarse)
- Step 2: NeuS (coarse)
- Step 3: Geometry refinement
- Step 4: Texture refinement
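In threestudio, each stage corresponds to one config file passed to `launch.py`. The sketch below shows how the four stages could be chained; the config names follow the threestudio-dreamcraft3D repository, but the exact flags and paths are assumptions to adapt to your checkout, not a verified interface of this Cog build.

```python
# Sketch: chaining the four DreamCraft3D stages via threestudio's launch.py.
# Config names mirror threestudio-dreamcraft3D; in practice each later stage
# also resumes from the previous stage's checkpoint (omitted here for brevity).

STAGES = [
    "dreamcraft3d-coarse-nerf",  # Step 1: coarse NeRF
    "dreamcraft3d-coarse-neus",  # Step 2: coarse NeuS
    "dreamcraft3d-geometry",     # Step 3: geometry refinement
    "dreamcraft3d-texture",      # Step 4: texture refinement
]

def stage_command(stage: str, prompt: str, image_path: str) -> list[str]:
    """Build a launch.py invocation for one pipeline stage (hypothetical flags)."""
    return [
        "python", "launch.py",
        "--config", f"configs/{stage}.yaml",
        "--train",
        f"system.prompt_processor.prompt={prompt}",
        f"data.image_path={image_path}",
    ]

if __name__ == "__main__":
    for stage in STAGES:
        print(" ".join(stage_command(stage, "a delicious hamburger", "inputs/burger.png")))
```

Running the stages in this order reproduces the coarse-to-fine structure of the pipeline: geometry is fixed before texture is boosted.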
## Underlying models
This pipeline builds upon DeepFloyd IF, Stable Zero123 and Stable Diffusion 2.1 base for guidance. Please make sure you read and abide by the relevant licenses before using it.
- DeepFloyd IF: https://github.com/deep-floyd/IF
- Stable Zero123: https://huggingface.co/stabilityai/stable-zero123
- Stable Diffusion 2.1: https://huggingface.co/stabilityai/stable-diffusion-2-1
## Abstract
We present DreamCraft3D, a hierarchical 3D content generation method that produces high-fidelity and coherent 3D objects. We tackle the problem by leveraging a 2D reference image to guide the stages of geometry sculpting and texture boosting. A central focus of this work is to address the consistency issue that existing works encounter. To sculpt geometries that render coherently, we perform score distillation sampling via a view-dependent diffusion model. This 3D prior, alongside several training strategies, prioritizes the geometry consistency but compromises the texture fidelity. We further propose Bootstrapped Score Distillation to specifically boost the texture. We train a personalized diffusion model, Dreambooth, on the augmented renderings of the scene, imbuing it with 3D knowledge of the scene being optimized. The score distillation from this 3D-aware diffusion prior provides view-consistent guidance for the scene. Notably, through an alternating optimization of the diffusion prior and 3D scene representation, we achieve mutually reinforcing improvements: the optimized 3D scene aids in training the scene-specific diffusion model, which offers increasingly view-consistent guidance for 3D optimization. The optimization is thus bootstrapped and leads to substantial texture boosting. With tailored 3D priors throughout the hierarchical generation, DreamCraft3D generates coherent 3D objects with photorealistic renderings, advancing the state-of-the-art in 3D content generation.
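For reference, the score distillation sampling (SDS) gradient used in the geometry-sculpting stage has the standard DreamFusion form (symbols here follow the usual convention in the literature, not this repo's code): for a rendering $x = g(\theta)$ of the 3D scene with parameters $\theta$, a frozen diffusion model $\epsilon_\phi$, and conditioning $y$,

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}
= \mathbb{E}_{t,\epsilon}\!\left[
    w(t)\,\big(\epsilon_\phi(x_t;\, y,\, t) - \epsilon\big)\,
    \frac{\partial x}{\partial \theta}
  \right]
```

where $x_t$ is the rendering noised to timestep $t$ and $w(t)$ a weighting. Bootstrapped Score Distillation replaces the fixed $\epsilon_\phi$ with a DreamBooth-personalized model retrained on the scene's own augmented renderings, alternating between updating the scene and the diffusion prior.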
## Paper
```bibtex
@article{sun2023dreamcraft3d,
  title={DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior},
  author={Sun, Jingxiang and Zhang, Bo and Shao, Ruizhi and Wang, Lizhen and Liu, Wen and Xie, Zhenda and Liu, Yebin},
  journal={arXiv preprint arXiv:2310.16818},
  year={2023}
}
```