ayushunleashed / partpacker

Part-level 3D object generation from single-view images


Run time and cost

This model runs on Nvidia A100 (80GB) GPU hardware.

Readme

PartPacker

*(Teaser image)*

PartPacker performs efficient part-level 3D object generation from single-view images using dual volume packing. This Cog wrapper provides a convenient API for running the model on Replicate; a minimal usage sketch follows the model details below.

Original Paper: PartPacker: Efficient Part-level 3D Object Generation via Dual Volume Packing

Model Details

  • Architecture: Diffusion Transformer (DiT) with Flow Matching
  • Input: Single RGB image (preprocessed to 518x518)
  • Output: GLB file with part-separated 3D mesh
  • Part Generation: Dual volume packing for efficient part-level generation
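
Below is a minimal usage sketch with the Replicate Python client. The parameter names mirror the options described in this README; the "image" field name, the example values, and the shape of the returned output are assumptions, so check the model's API schema on Replicate before relying on them.

```python
import replicate

# Parameter names follow this README; the "image" field name and the
# example values are assumptions -- verify against the model's API schema.
output = replicate.run(
    "ayushunleashed/partpacker",
    input={
        "image": open("object.png", "rb"),  # single RGB view of the object
        "num_steps": 50,                    # diffusion sampling steps
        "cfg_scale": 7.0,                   # classifier-free guidance strength
        "grid_resolution": 384,             # volume grid used for mesh extraction
        "simplify_mesh": True,              # reduce triangle count of the GLB
        "target_num_faces": 50000,          # face budget when simplifying
    },
)

# Save the part-separated GLB. Newer replicate clients return a FileOutput;
# older ones return a URL string, so adjust this step to your client version.
with open("partpacker_output.glb", "wb") as f:
    f.write(output.read())
```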

Performance Tips

  1. Quality vs Speed (presets sketched after this list):
     • Lower num_steps (30-40) = faster generation
     • Higher num_steps (70-100) = better quality

  2. Memory Management:
     • Lower grid_resolution (256-320) = less memory usage
     • Higher grid_resolution (448-512) = more detail

  3. Mesh Optimization:
     • Enable simplify_mesh for smaller file sizes
     • Adjust target_num_faces based on your needs
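
One way to act on these trade-offs is to keep a few named presets and merge them into the prediction input. The values below are illustrative starting points drawn from the ranges above, not benchmarked settings, and the "image" field name is again an assumption.

```python
# Illustrative presets based on the ranges listed above.
PRESETS = {
    "fast":     {"num_steps": 30,  "grid_resolution": 256, "simplify_mesh": True,  "target_num_faces": 20000},
    "balanced": {"num_steps": 50,  "grid_resolution": 384, "simplify_mesh": True,  "target_num_faces": 50000},
    "quality":  {"num_steps": 100, "grid_resolution": 512, "simplify_mesh": False, "target_num_faces": 100000},
}

def build_input(image_path: str, preset: str = "balanced") -> dict:
    """Merge a preset with the input image ("image" field name is an assumption)."""
    return {"image": open(image_path, "rb"), **PRESETS[preset]}
```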

Common Issues

  1. Out of Memory: Reduce grid_resolution or use smaller input images (a retry sketch follows this list)
  2. Poor Quality: Increase num_steps or cfg_scale
  3. Large File Size: Enable simplify_mesh with lower target_num_faces
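
The out-of-memory advice can be automated with a fallback loop that retries at progressively lower grid_resolution values, as sketched below. Detecting OOM by looking for "memory" in the error message is an assumption about the error text, so adapt the check to the failures you actually see.

```python
import replicate

def run_with_fallback(image_path: str, extra: dict | None = None,
                      resolutions=(512, 448, 384, 320, 256)):
    """Retry the prediction at lower grid resolutions when it fails with OOM."""
    last_error = None
    for res in resolutions:
        try:
            with open(image_path, "rb") as image:
                return replicate.run(
                    "ayushunleashed/partpacker",
                    input={"image": image, "grid_resolution": res, **(extra or {})},
                )
        except Exception as err:
            if "memory" not in str(err).lower():
                raise  # not an OOM-looking failure; don't mask it
            last_error = err
    raise last_error
```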

Input Image Tips

  • Use high-contrast objects with clear boundaries
  • Avoid cluttered backgrounds (auto-removal works best with simple backgrounds)
  • Center the object in the image (a simple preprocessing sketch follows this list)
  • Use good lighting conditions
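
If your photos are off-center or tightly cropped, a small local preprocessing step along the lines of these tips can help before upload. The helper below is hypothetical and not part of the wrapper; the model still performs its own background removal and 518x518 preprocessing server-side.

```python
from PIL import Image

def prepare_image(path: str, out_path: str = "prepared.png", size: int = 518) -> str:
    """Pad the photo to a square on a plain white background, then resize."""
    img = Image.open(path).convert("RGB")
    side = max(img.size)
    canvas = Image.new("RGB", (side, side), (255, 255, 255))  # plain background
    offset = ((side - img.width) // 2, (side - img.height) // 2)
    canvas.paste(img, offset)  # paste the photo centered on the square canvas
    canvas.resize((size, size), Image.LANCZOS).save(out_path)
    return out_path
```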

License

This Cog wrapper follows the same license as the original PartPacker project. See the original repository for license details.

Citation

If you use this model, please cite the original PartPacker paper:

@article{tang2025partpacker,
  title={Efficient Part-level 3D Object Generation via Dual Volume Packing},
  author={Tang, Jiaxiang and Lu, Ruijie and Li, Zhaoshuo and Hao, Zekun and Li, Xuan and Wei, Fangyin and Song, Shuran and Zeng, Gang and Liu, Ming-Yu and Lin, Tsung-Yi},
  journal={arXiv preprint arXiv:2506.09980},
  year={2025}
}