tiger-ai-lab/anyv2v:7f67be44
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
video | string | | Input video |
instruct_pix2pix_prompt | string | turn man into robot | The first step involves using timbrooks/instruct-pix2pix to edit the first frame. Specify the prompt for editing the first frame. |
editing_prompt | string | a man doing exercises for the body and mind | Describe the input video |
editing_negative_prompt | string | Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms | Things not to see in the edited video |
num_inference_steps | integer | 50 | Number of denoising steps (min: 1, max: 500) |
guidance_scale | number | 9 | Scale for classifier-free guidance (min: 1, max: 20) |
ddim_inversion_steps | integer | 500 | Number of DDIM inversion steps |
seed | integer | | Random seed. Leave blank to randomize the seed |
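A minimal sketch of calling this version with the Replicate Python client, assuming `pip install replicate` and a `REPLICATE_API_TOKEN` in the environment. The local path `input.mp4` is a placeholder, and the version hash is shown truncated on this page; substitute the full hash when running.

```python
# Minimal sketch: run this model version through the Replicate Python client.
# Assumes `pip install replicate` and REPLICATE_API_TOKEN set in the environment.
import replicate

# The version hash below is the truncated form shown on this page; use the
# full version hash in practice. "input.mp4" is a placeholder path.
with open("input.mp4", "rb") as video_file:
    output = replicate.run(
        "tiger-ai-lab/anyv2v:7f67be44",
        input={
            "video": video_file,
            "instruct_pix2pix_prompt": "turn man into robot",
            "editing_prompt": "a man doing exercises for the body and mind",
            "editing_negative_prompt": (
                "Distorted, discontinuous, Ugly, blurry, low resolution, "
                "motionless, static, disfigured, disconnected limbs, "
                "Ugly faces, incomplete arms"
            ),
            "num_inference_steps": 50,
            "guidance_scale": 9,
            "ddim_inversion_steps": 500,
        },
    )

print(output)  # URI of the edited video, per the output schema below
```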
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{"format": "uri", "title": "Output", "type": "string"}
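A short sketch of handling the response, assuming `output` is the URI string described by the schema above (newer versions of the Replicate client may instead return a file-like object you can read directly).

```python
# Sketch of saving the result, assuming `output` is the URI string described
# by the output schema above.
import urllib.request

output_uri = str(output)  # `output` from the replicate.run() call above
urllib.request.urlretrieve(output_uri, "edited_video.mp4")
print("Saved edited video to edited_video.mp4")
```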