MVDream
MVDream is a text-to-3D model that generates 3D assets via score distillation on self-generated multi-view images. See the original repository and paper for details. Note that this model is intended to generate single objects and cannot generate full scenes.
How to use the API
To use MVDream, enter a text description of the 3D asset you would like to generate. Generating a 3D asset (which trains a per-asset model via score distillation) takes about 55-60 minutes. The API input arguments are as follows:
- prompt: text prompt describing the 3D asset to generate.
- negative_prompt: text prompt describing attributes or features you do not want in your 3D asset.
- num_steps: number of training steps. Keeping the default value is strongly advised for optimal results.
- seed: random seed for reproducibility; defaults to None (non-deterministic). Set to a fixed integer for deterministic generation.
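The arguments above can be sketched as a request payload. This is a minimal, hypothetical helper (the function name and defaults are illustrative, not part of the actual API) showing how the inputs fit together:

```python
def build_mvdream_input(prompt, negative_prompt="", num_steps=None, seed=None):
    """Assemble an input dict for an MVDream generation request.

    Parameter names mirror the documented API arguments; this helper
    itself is an illustrative sketch, not the official client.
    """
    payload = {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
    }
    if num_steps is not None:
        # Omitting num_steps keeps the server-side default,
        # which is strongly advised for optimal results.
        payload["num_steps"] = num_steps
    if seed is not None:
        # A fixed seed makes generation deterministic.
        payload["seed"] = seed
    return payload


# Example: a reproducible generation request.
request = build_mvdream_input(
    prompt="an astronaut riding a horse",
    negative_prompt="blurry, low quality",
    seed=42,
)
```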
References
@article{shi2023MVDream,
  author  = {Shi, Yichun and Wang, Peng and Ye, Jianglong and Mai, Long and Li, Kejie and Yang, Xiao},
  title   = {MVDream: Multi-view Diffusion for 3D Generation},
  journal = {arXiv:2308.16512},
  year    = {2023},
}