Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
| Field | Type | Default value | Description |
|---|---|---|---|
| prompt | string | | Prompt for the generated image. If you include the `trigger_word` used in the training process you are more likely to activate the trained object, style, or concept in the resulting image. |
| advanced_flux_model_layers_to_patch | string | `double_blocks\.([0-9]+)\.(img\|txt)_(mod\|attn\|mlp)\.(lin\|qkv\|proj\|0\|2)\.(weight\|bias)=1.01`<br>`single_blocks\.([0-9]+)\.(linear[12]\|modulation\.lin)\.(weight\|bias)=1.01` | A set of regular expressions and their target values. Each regular expression should be on a new line. By default, all layers are set to 1.01. |
| double_blocks_to_patch | string | 0-18 | Specify double blocks to patch. For example: `0-18` or `0,2,4`. Use `0-18` to patch all double blocks. |
| double_block_targets | None | all | Double blocks can patch the image blocks, the text blocks, or both, or target specific modulation, attention, and MLP layers. |
| double_block_subtype_targets | None | lin, qkv, proj | None |
| single_blocks_to_patch | string | 0-37 | Specify single blocks to patch. For example: `0-18` or `0,2,4`. Use `0-37` to patch all single blocks. |
| single_block_targets | None | all | None |
| weights_and_biases_to_patch | None | weights_and_biases | None |
| simple_flux_model_layers_to_patch | None | all layers | Alternatively, pick a set of predefined layers to patch. |
| simple_flux_model_layers_to_patch_strength | number | 1 (Max: 2) | Strength of the patch. |
| aspect_ratio | None | 1:1 | Aspect ratio for the generated image in text-to-image mode. The size will always be 1 megapixel, i.e. 1024x1024 if the aspect ratio is 1:1. To use an arbitrary width and height, set aspect ratio to 'custom'. Note: ignored in img2img and inpainting modes. |
| num_outputs | integer | 1 (Min: 1, Max: 4) | Number of images to output. |
| num_inference_steps | integer | 28 (Min: 1, Max: 50) | Number of inference steps. More steps can give more detailed images, but take longer. |
| guidance_scale | number | 3 (Max: 10) | Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3, and 3.5. |
| max_shift | number | 1.15 (Max: 10) | Maximum shift. |
| base_shift | number | 0.5 (Max: 10) | Base shift. |
| output_format | None | webp | Format of the output images. |
| output_quality | integer | 95 (Max: 100) | Quality of the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. |
| seed | integer | | Set a seed for reproducibility. Random by default. |
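For context, a minimal sketch of passing these inputs through the Replicate Python client is shown below. The model reference string, the prompt, and the `TOK` trigger word are placeholders, not taken from this page; the parameter names and values mirror the table above (the two regex lines are the documented defaults for `advanced_flux_model_layers_to_patch`). Whether the simple and advanced patch parameters are meant to be combined is not specified here, so treat the exact input mix as illustrative.

```python
import replicate

# Per-layer patch targets for the advanced parameter: one regex per line,
# each followed by "=<scale>". These two lines mirror the documented defaults.
advanced_patches = "\n".join([
    r"double_blocks\.([0-9]+)\.(img|txt)_(mod|attn|mlp)\.(lin|qkv|proj|0|2)\.(weight|bias)=1.01",
    r"single_blocks\.([0-9]+)\.(linear[12]|modulation\.lin)\.(weight|bias)=1.01",
])

output = replicate.run(
    "owner/model-name:version",  # placeholder -- substitute the actual model reference for this page
    input={
        "prompt": "a photo of TOK on a mountain trail",  # TOK stands in for your trigger_word
        "advanced_flux_model_layers_to_patch": advanced_patches,
        "double_blocks_to_patch": "0-18",
        "single_blocks_to_patch": "0-37",
        "num_inference_steps": 28,
        "guidance_scale": 3,
        "aspect_ratio": "1:1",
        "output_format": "webp",
        "output_quality": 95,
        "seed": 42,
    },
)
print(output)  # an array of image URIs (see the output schema below)
```

Any field omitted from `input` falls back to the default listed in the table.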
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
  "type": "array",
  "items": {
    "type": "string",
    "format": "uri"
  },
  "title": "Output"
}
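Since the output is an array of URI strings, saving the generated images locally might look like the following sketch. The helper name and file naming scheme are arbitrary, and the `.webp` extension assumes the default `output_format`.

```python
import urllib.request

def save_outputs(output: list[str], prefix: str = "output") -> None:
    """Download each image URI from the model's output array to a local file."""
    for i, url in enumerate(output):
        filename = f"{prefix}_{i}.webp"  # extension should match the output_format input
        urllib.request.urlretrieve(url, filename)
        print(f"saved {filename}")
```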