These models generate 3D content such as objects, scenes, and textures, letting you create 3D assets from text prompts and images.
We recommend TRELLIS as the best all-around model for 3D content generation. This community-uploaded model can create a detailed 3D asset from a single image in under a minute.
Don't have an image of the asset you want to create? Create one from a text prompt with our image generation models →
Recommended Models
For most use cases, firtoz/trellis is both fast and reliable. It can turn a single reference image into a complete 3D object in under a minute.
If you’re working with smaller assets or simple geometry, adirik/wonder3d is a quick option, and tencent/hunyuan3d-2mv adds multiview-controlled shape generation when you need consistency across angles.
firtoz/trellis strikes the best overall balance — it’s accurate, fast, and easy to use from either an image or text-generated reference.
If you need finer geometry control or multi-angle consistency, tencent/hunyuan3d-2 and tencent/hunyuan3d-2mv deliver excellent results with slightly higher compute requirements.
firtoz/trellis is the go-to model for image-to-3D workflows. Upload a single image, and it reconstructs a 3D asset that captures shape and texture details accurately.
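As a minimal sketch, an image-to-3D call with firtoz/trellis through the Replicate Python client might look like this. The input field name (`image`) is an assumption — check the model's API tab for the real schema.

```python
# Sketch of an image-to-3D call with firtoz/trellis via the Replicate Python
# client. The "image" input field name is an assumption; verify it against the
# model's API schema on Replicate before use.
import os

def build_trellis_input(image_url: str) -> dict:
    # Hypothetical input schema for the model.
    return {"image": image_url}

def image_to_3d(image_url: str):
    import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set
    return replicate.run("firtoz/trellis", input=build_trellis_input(image_url))

if __name__ == "__main__" and os.environ.get("REPLICATE_API_TOKEN"):
    print(image_to_3d("https://example.com/chair.png"))
```

The guard at the bottom keeps the script inert unless an API token is configured, so you can import the helpers without triggering a billable run.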
adirik/wonder3d also works well for turning images into full 3D objects but emphasizes realistic mesh generation over artistic interpretation.
If you’re starting from a prompt instead of an image, adirik/mvdream and cjwbw/shap-e can generate 3D assets directly from text.
They’re ideal for prototyping objects quickly or generating creative concepts for games and design projects.
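Starting from text works the same way; here is a hedged sketch using cjwbw/shap-e, where the `prompt` field name is an assumption to confirm on the model page.

```python
# Sketch of a text-to-3D call with cjwbw/shap-e. The "prompt" field name is an
# assumption; confirm it in the model's API schema on Replicate.
import os

def build_shap_e_input(prompt: str) -> dict:
    # Hypothetical input schema for the model.
    return {"prompt": prompt}

def text_to_3d(prompt: str):
    import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set
    return replicate.run("cjwbw/shap-e", input=build_shap_e_input(prompt))

if __name__ == "__main__" and os.environ.get("REPLICATE_API_TOKEN"):
    text_to_3d("a red toy sports car")
```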
You can combine these processes to create production-ready 3D assets: generate a reference image from text, convert it into a mesh, then texture the result.
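Chained together, the full text-to-textured-mesh flow might look like the sketch below. All model refs and input field names here are assumptions; verify each on its model page.

```python
# Hedged sketch of combining the stages into one script: generate a reference
# image from text, turn it into a mesh, then texture the mesh. Model refs and
# input field names are assumptions -- verify on each model page.
STAGES = ("text-to-image", "image-to-3d", "texturing")

def run_pipeline(prompt: str):
    import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set
    # 1. Text -> reference image (illustrative image model choice).
    images = replicate.run("stability-ai/sdxl", input={"prompt": prompt})
    # 2. Reference image -> 3D mesh.
    mesh = replicate.run("firtoz/trellis", input={"image": images[0]})
    # 3. Mesh + prompt -> textured mesh (e.g. adirik/texture).
    return replicate.run("adirik/texture", input={"mesh": mesh, "prompt": prompt})
```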
Most models output 3D meshes (like .obj or .glb files) or multi-view renders (a set of images showing the object from different angles).
Texturing models output texture maps or material layers that can be applied to your 3D meshes in external software.
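Whichever output format you get, you'll usually want it on disk. A small sketch, assuming outputs arrive as URLs (the Python client may instead return file-like objects depending on client version):

```python
# Sketch: saving a model's output file (a .glb mesh or a texture map) locally.
# Assumes the output is delivered as a URL.
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

def output_filename(url: str, fallback: str = "asset.glb") -> str:
    # Derive a local filename from the URL path, falling back to a default.
    name = os.path.basename(urlparse(url).path)
    return name or fallback

def save_output(url: str, dest_dir: str = ".") -> str:
    path = os.path.join(dest_dir, output_filename(url))
    urlretrieve(url, path)  # network download; requires the URL to be live
    return path
```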
Models like tencent/hunyuan3d-2 and cjwbw/shap-e are open source and can be self-hosted with Cog or Docker.
If you want to publish your own model on Replicate, package it with Cog: write a cog.yaml defining the build environment and a predict.py declaring the model's inputs and outputs, push it to your account with `cog push`, and Replicate will handle serving and scaling.
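Replicate models are packaged with Cog, whose config file is cog.yaml. A minimal sketch for a 3D model might look like this (the package versions are illustrative assumptions):

```yaml
# cog.yaml -- build environment and entry point (versions are illustrative)
build:
  gpu: true
  python_version: "3.11"
  python_packages:
    - "torch==2.1.0"
    - "trimesh==4.0.0"
predict: "predict.py:Predictor"
```

You then push the packaged model with `cog push r8.im/<your-username>/<model-name>`.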
Yes — most 3D generation models support commercial use, though it depends on the individual license.
Always review the License section on each model page, especially for non-commercial research models like adirik/text2tex.
Start with a clear reference image or a short text prompt describing your object. For example: "a low-poly wooden treasure chest with brass fittings, plain background".


prunaai/hunyuan3d-2
hunyuan3d-2 optimised with the pruna toolkit: https://github.com/PrunaAI/pruna
Updated 6 months, 3 weeks ago
7.7K runs


tencent/hunyuan3d-2mv
Hunyuan3D-2mv is finetuned from Hunyuan3D-2 to support multiview controlled shape generation.
Updated 7 months, 4 weeks ago
8.4K runs


adirik/text2tex
[Non-commercial] Generate texture for 3D assets using text descriptions
Updated 1 year, 7 months ago
365 runs


adirik/wonder3d
Generates 3D assets from images
Updated 1 year, 8 months ago
3K runs


lucataco/deep3d
Deep3D: Real-Time end-to-end 2D-to-3D Video Conversion, based on deep learning
Updated 1 year, 9 months ago
545 runs


adirik/imagedream
Image-Prompt Multi-view Diffusion for 3D Generation
Updated 1 year, 9 months ago
1.5K runs


adirik/texture
Generate texture for your mesh with text prompts
Updated 1 year, 11 months ago
1.3K runs


adirik/mvdream
Generate 3D assets using text descriptions
Updated 2 years ago
1.1K runs


jd7h/zero123plusplus
Turn an image into a set of images from different 3D angles
Updated 2 years ago
11.1K runs


cjwbw/shap-e
Generating Conditional 3D Implicit Functions
Updated 2 years, 5 months ago
15.5K runs