adirik / texture

Generate texture for your mesh with text prompts


Run time and cost

This model runs on Nvidia A40 GPU hardware. Predictions typically complete within 7 minutes.

Readme

TEXTure

TEXTure is a text- and mesh-conditioned texture generation model. See the original repository and paper for details.

How to use the API

To use TEXTure, upload your 3D object file in .obj or .off format and enter a text description of the texture you would like to generate. The API returns a textured mesh and a rendered video of the result. The input arguments are as follows:

  • prompt: text prompt to generate texture from.
  • shape_path: path to the 3D file (.obj or .off) you would like to generate texture for.
  • shape_scale: factor to scale image by.
  • guidance_scale: guidance scale; higher values yield textures that follow the input text more closely.
  • texture_resolution: resolution of the texture to be generated.
  • texture_interpolation_mode: interpolation mode used when sampling from the texture image, options: 'nearest', 'bilinear', 'bicubic'.
  • seed: seed for reproducibility; defaults to None (random). Set it to a fixed value for deterministic generation.
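The arguments above can be passed through the Replicate Python client. A minimal sketch, assuming the `replicate` package is installed, `REPLICATE_API_TOKEN` is set in your environment, and using an illustrative prompt, mesh URL, and parameter values (not defaults from the model):

```python
# Illustrative input for the TEXTure API; all values are example choices.
input_args = {
    "prompt": "an ancient bronze statue covered in green patina",
    # File inputs accept a public URL or an open file handle.
    "shape_path": "https://example.com/statue.obj",
    "shape_scale": 0.6,
    "guidance_scale": 7.5,
    "texture_resolution": 1024,
    "texture_interpolation_mode": "bilinear",
    "seed": 42,  # fixed seed for reproducible output
}

# Uncomment to run the prediction (requires network access and an API token):
# import replicate
# output = replicate.run("adirik/texture", input=input_args)
# `output` contains the textured mesh and the rendered video of the result.
```

Pinning a seed is useful when iterating on the prompt, since it keeps the rest of the generation deterministic between runs.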

References

@article{Richardson2023TEXTureTT,
  title={TEXTure: Text-Guided Texturing of 3D Shapes},
  author={Elad Richardson and Gal Metzer and Yuval Alaluf and Raja Giryes and Daniel Cohen-Or},
  journal={ACM SIGGRAPH 2023 Conference Proceedings},
  year={2023},
  url={https://api.semanticscholar.org/CorpusID:256597953}
}