ddvinh1/text2image-1.3b

Public
10 runs

Run ddvinh1/text2image-1.3b with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
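For example, a minimal call with the Python client might look like the sketch below. This assumes the replicate package is installed and REPLICATE_API_TOKEN is set in your environment; the exact model identifier or version string you need may differ from what is shown.

import replicate

# Run the model with just a text prompt; unspecified fields fall back to their defaults.
output = replicate.run(
    "ddvinh1/text2image-1.3b",
    input={"prompt": "A watercolor painting of a lighthouse at dawn"},
)

# Depending on the client version, this is a URI string (or a file-like object)
# pointing at the generated image.
print(output)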

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used. A complete example request that sets every field follows the list below.

prompt (string)
  Text prompt for image generation.

negative_prompt (string)
  Default: "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"
  Negative prompt to avoid unwanted elements.

width (integer)
  Default: 1024 | Min: 480 | Max: 1280
  Width of the output image.

height (integer)
  Default: 1024 | Min: 480 | Max: 1280
  Height of the output image.

num_inference_steps (integer)
  Default: 30 | Min: 1 | Max: 80
  Number of inference steps. More steps = higher quality but slower.

guidance_scale (number)
  Default: 1 | Max: 10
  Guidance scale for classifier-free guidance.

lora_id (string)
  Optional: Hugging Face LoRA ID for custom styling (e.g., 'username/lora-name').
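Putting the schema together, a request that sets every field might look like this sketch. The values are illustrative only, and the lora_id shown is a hypothetical placeholder, not a real LoRA.

import replicate

inputs = {
    "prompt": "A cozy cabin in a snowy forest, cinematic lighting",
    "negative_prompt": "blurred details, low quality, deformed, messy background",
    "width": 768,               # 480-1280
    "height": 1024,             # 480-1280
    "num_inference_steps": 40,  # 1-80; more steps = higher quality but slower
    "guidance_scale": 5,        # up to 10
    "lora_id": "username/lora-name",  # optional; hypothetical example ID
}

output = replicate.run("ddvinh1/text2image-1.3b", input=inputs)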

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "string",
  "title": "Output",
  "format": "uri"
}
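Because the output is a single URI, one way to save the generated image locally is the sketch below. It assumes the requests package is available; with recent versions of the replicate client the output may instead be a file-like object you can read directly, and the actual file format of the image may differ from the .png name used here.

import replicate
import requests

output = replicate.run(
    "ddvinh1/text2image-1.3b",
    input={"prompt": "A red bicycle leaning against a brick wall"},
)

# The response is a URI pointing at the generated image.
image_url = str(output)
response = requests.get(image_url, timeout=60)
response.raise_for_status()

with open("output.png", "wb") as f:
    f.write(response.content)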