okaris / omni-bg

Run okaris/omni-bg with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
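For example, a minimal sketch with the official Python client (`pip install replicate`) might look like the following. The image URL and text inputs are placeholders, and you may want to pin the exact model version shown in the Playground:

```python
# A minimal sketch using the official `replicate` Python client.
# Assumes REPLICATE_API_TOKEN is set in the environment; all input values are placeholders.
import replicate

output = replicate.run(
    "okaris/omni-bg",  # append ":<version>" to pin the exact version from the Playground
    input={
        "image": "https://example.com/product.png",  # placeholder base image URL
        "subject": "a ceramic coffee mug",
        "placed_on": "a marble countertop",
        "background": "a bright, minimal kitchen",
    },
)

# Results are yielded as image URLs (see the output schema below).
for url in output:
    print(url)
```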

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

| Field | Type | Default value | Range | Description |
|---|---|---|---|---|
| job_id | string | | | Job ID for the model. Randomly generated if not provided |
| image | string | | | Base image for the model |
| subject | string | | | Subject of the image |
| placed_on | string | | | Where the subject is placed |
| background | string | | | Background description |
| camera_orientation | string | | | Camera orientation |
| camera_angle | string | | | Camera angle |
| main_light_direction | string | | | Main light direction |
| proposed_background | string | | | Proposed background |
| proposed_brightness | number | | | Proposed brightness |
| prompt_suffix | string | ,high quality, medium format, f/32, realistic, cinematic, RAW | | Prompt suffix |
| prompt_prefix | string | photograph | | Starting prompt for the model. Gets added with other variables as needed |
| negative_prompt | string | | | Negative prompt for the model. Not necessary, but can be used to guide the model |
| override_prompt | string | | | Override prompt for the model. Not necessary, but can be used to guide the model |
| background_brightness | number | 0.5 | Min -1, Max 1 | Background brightness for the model |
| force_center | boolean | False | | Force the subject to be centered |
| seed | integer | -1 | | Random seed for the model. Leave as -1 for a random seed |
| height | integer | 768 | | Height of the output image. Must be divisible by 32, but this is handled automatically |
| width | integer | 1024 | | Width of the output image. Must be divisible by 32, but this is handled automatically |
| final_step_upscale_4x | boolean | False | | Final step upscale 4x for the model |
| debug_mode | boolean | False | | Debug mode for the model |
| object_padding | number | 0.1 | Max 1 | Object padding for the model. Percent of the object width |
| first_pass_steps | integer | 10 | Min 10, Max 50 | Number of steps for the first pass |
| first_pass_i2i_strength | number | 1 | Max 1 | I2I strength for the first pass. Must be 1.0 unless you feel like experimenting |
| first_pass_pag_scale | number | 2 | Min 0.5, Max 5 | PAG scale for the first pass. 1 or 2 seem to work best |
| first_pass_guidance_scale | number | 4 | Min 1, Max 14 | Guidance scale for the first pass |
| first_pass_outpaint_strength | number | 1 | Max 1 | Outpaint strength for the first pass |
| first_pass_depth_strength | number | 0 | Max 1 | Depth strength for the first pass. For the first pass we want more flexibility, so 0.0 is a good starting point |
| first_pass_line_strength | number | 1 | Max 1 | Line strength for the first pass |
| skip_second_pass | boolean | False | | Skip the second pass |
| second_pass_steps | integer | 10 | Min 10, Max 50 | Number of steps for the second pass |
| second_pass_i2i_strength | number | 0.5 | Max 1 | I2I strength for the second pass |
| second_pass_pag_scale | number | 1 | Min 0.5, Max 5 | PAG scale for the second pass |
| second_pass_guidance_scale | number | 4 | Min 1, Max 14 | Guidance scale for the second pass |
| second_pass_outpaint_strength | number | 1 | Max 1 | Outpaint strength for the second pass |
| second_pass_depth_strength | number | 1 | Max 1 | Depth strength for the second pass. We now want to lock in the depth, so 1.0 is a good starting point |
| second_pass_line_strength | number | 1 | Max 1 | Line strength for the second pass |
| skip_light_pass | boolean | False | | Skip the light pass |
| light_pass_steps | integer | 10 | Min 10, Max 50 | Number of steps for the light pass. This is a two-step process and takes 1.9x steps |
| light_pass_guidance_scale | number | 7 | Min 1, Max 14 | Guidance scale for the light pass |
| light_mix_scale | number | 0.5 | Max 1 | Mix scale for the light pass. 0.5 is a good starting point |
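
As a sketch of how these fields fit together, the hypothetical input below touches each stage of the pipeline (first pass, second pass, light pass). All values are illustrative; anything omitted falls back to the defaults above.

```python
# Illustrative input only: the image URL, subject, background, and step counts are made up.
# Omitted fields fall back to the defaults listed in the schema above.
input_payload = {
    "image": "https://example.com/product.png",   # placeholder base image URL
    "subject": "a leather backpack",
    "placed_on": "a weathered wooden table",
    "background": "a mossy forest clearing at golden hour",
    "width": 1024,                  # must be divisible by 32 (handled automatically)
    "height": 768,
    "seed": -1,                     # -1 picks a random seed
    # First pass: keep depth loose (0.0) so the composition can change freely.
    "first_pass_steps": 20,
    "first_pass_depth_strength": 0.0,
    # Second pass: lock in depth (1.0) to preserve the subject's geometry.
    "second_pass_steps": 20,
    "second_pass_depth_strength": 1.0,
    # Light pass: 0.5 is the documented starting point for the mix scale.
    "light_pass_steps": 20,
    "light_mix_scale": 0.5,
}
```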

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "array",
  "items": {
    "type": "string",
    "format": "uri"
  },
  "title": "Output",
  "x-cog-array-type": "iterator"
}
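
Each item in the response is a URI pointing to a generated image, yielded as the model produces it. A minimal sketch for downloading the results, assuming the Python client and placeholder inputs from the earlier example:

```python
# A minimal sketch for saving the results with the standard library only.
# Per the output schema, each yielded item is treated as a URI string here;
# the input values are placeholders and REPLICATE_API_TOKEN must be set.
import urllib.request

import replicate

output = replicate.run(
    "okaris/omni-bg",
    input={
        "image": "https://example.com/product.png",  # placeholder base image URL
        "subject": "a ceramic coffee mug",
    },
)

for i, uri in enumerate(output):
    destination = f"omni-bg-output-{i}.png"  # the extension is an assumption; inspect the URI if unsure
    urllib.request.urlretrieve(uri, destination)
    print(f"saved {uri} -> {destination}")
```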