dribnet/dollarpeople:abe86f1f
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
prompt | string | | Prompt for generated image. If you include the `trigger_word` used in the training process you are more likely to activate the trained object, style, or concept in the resulting image. |
image | string | | Input image for image-to-image or inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored. |
mask | string | | Image mask for image inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored. |
aspect_ratio | string (enum) | 1:1 (options: 1:1, 16:9, 21:9, 3:2, 2:3, 4:5, 5:4, 3:4, 4:3, 9:16, 9:21, custom) | Aspect ratio for the generated image. If custom is selected, the height and width inputs below are used and inference runs in bf16 mode. |
height | integer | Min: 256, Max: 1440 | Height of generated image. Only used if `aspect_ratio` is set to custom. Rounded to the nearest multiple of 16. Incompatible with fast generation. |
width | integer | Min: 256, Max: 1440 | Width of generated image. Only used if `aspect_ratio` is set to custom. Rounded to the nearest multiple of 16. Incompatible with fast generation. |
prompt_strength | number | 0.8 (Max: 1) | Prompt strength when using img2img. 1.0 corresponds to full destruction of information in the input image. |
model | string (enum) | dev (options: dev, schnell) | Which model to run inference with. The dev model performs best with around 28 inference steps; the schnell model only needs 4 steps. |
num_outputs | integer | 1 (Min: 1, Max: 4) | Number of outputs to generate. |
num_inference_steps | integer | 28 (Min: 1, Max: 50) | Number of denoising steps. More steps can give more detailed images but take longer. |
guidance_scale | number | 3 (Max: 10) | Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3, and 3.5. |
seed | integer | | Random seed. Set for reproducible generation. |
output_format | string (enum) | webp (options: webp, jpg, png) | Format of the output images. |
output_quality | integer | 80 (Max: 100) | Quality when saving the output images, from 0 to 100; 100 is best quality, 0 is lowest. Not relevant for .png outputs. |
disable_safety_checker | boolean | False | Disable the safety checker for generated images. |
go_fast | boolean | False | Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16. |
megapixels | string (enum) | 1 (options: 1, 0.25) | Approximate number of megapixels for the generated image. |
lora_scale | number | 1 (Min: -1, Max: 3) | Determines how strongly the main LoRA should be applied. Sane results fall between 0 and 1 for base inference. For go_fast a 1.5x multiplier is applied to this value; scaling the base value by that amount generally performs well. You may still need to experiment to find the best value for your particular LoRA. |
extra_lora | string | | Load extra LoRA weights. Supports Replicate models in the format `<owner>/<model-name>` or `<owner>/<model-name>/<version>`, HuggingFace URLs in the format `huggingface.co/<owner>/<model-name>`, CivitAI URLs in the format `civitai.com/models/<id>[/<model-name>]`, or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'. |
extra_lora_scale | number | 1 (Min: -1, Max: 3) | Determines how strongly the extra LoRA should be applied. Sane results fall between 0 and 1 for base inference. For go_fast a 1.5x multiplier is applied to this value; scaling the base value by that amount generally performs well. You may still need to experiment to find the best value for your particular LoRA. |
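The schema above maps directly onto an API call. The sketch below is a minimal, hypothetical example using the official `replicate` Python client; it assumes the client is installed, `REPLICATE_API_TOKEN` is set in your environment, and that you substitute the full version ID (the hash shown on this page is abbreviated). The prompt text is a placeholder.

```python
# Minimal sketch of calling this model with the inputs documented above.
# Assumptions: `pip install replicate`, REPLICATE_API_TOKEN set in the environment,
# and the full version hash substituted for the abbreviated one shown on this page.
import replicate

output = replicate.run(
    "dribnet/dollarpeople",  # append ":<full-version-id>" to pin this exact version
    input={
        "prompt": "a figure in the style of TOK",  # placeholder; use your trigger word
        "aspect_ratio": "1:1",
        "model": "dev",
        "num_inference_steps": 28,
        "guidance_scale": 3,
        "num_outputs": 1,
        "output_format": "webp",
        "output_quality": 80,
        "lora_scale": 1,
    },
)
print(output)  # array of image URIs, as described in the output schema below
```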
Output schema
The shape of the response you’ll get when you run this model with an API.
{
  "type": "array",
  "items": {
    "type": "string",
    "format": "uri"
  },
  "title": "Output"
}
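As a rough illustration of consuming that array, the sketch below saves each returned image locally. It assumes `output` is the value returned by `replicate.run()` above, that each item stringifies to a downloadable URL (true for plain URL strings and for the client's file output objects), and that the default `webp` output format was used.

```python
# Sketch of saving the returned images. Assumes `output` is the array returned by
# replicate.run() above and that each item stringifies to a downloadable URL.
import urllib.request

for i, item in enumerate(output):
    url = str(item)
    # The .webp extension assumes the default output_format; adjust if you chose jpg or png.
    with urllib.request.urlopen(url) as resp, open(f"output_{i}.webp", "wb") as f:
        f.write(resp.read())
```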