
lightweight-ai/w.2_f-s
Public · 46 runs
Run lightweight-ai/w.2_f-s with an API
Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
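As a quick reference, a minimal call with the Replicate Python client might look like the sketch below. The image URLs and prompt are placeholders, and any field you omit falls back to its default.

```python
# Minimal sketch using the Replicate Python client (pip install replicate).
# Assumes REPLICATE_API_TOKEN is set in the environment; the image URLs and
# prompt below are placeholders, not real assets.
import replicate

output = replicate.run(
    # Community models may need a version pin, i.e. "lightweight-ai/w.2_f-s:<version>".
    "lightweight-ai/w.2_f-s",
    input={
        "start_image": "https://example.com/first-frame.png",  # required
        "end_image": "https://example.com/last-frame.png",     # required
        "prompt": "a smooth morph between the two frames",     # required
        # Optional fields (resolution, duration_seconds, steps, guidance_scale,
        # guidance_scale_2, seed) use their defaults when omitted.
    },
)
print(output)  # a URI pointing at the generated video
```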
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
start_image | string | (required) | The starting frame of the video. |
end_image | string | (required) | The final frame of the video. |
resolution | string (480p or 720p) | 480p | Set the output video resolution. 720p requires more VRAM. |
prompt | string | (required) | Describe the transition or motion between the two images. |
negative_prompt | string | | Negative prompt. |
duration_seconds | number | 2.1 (min: 0.5, max: 5.1) | Video duration in seconds, clamped to the model's 8-81 frame range (see the sketch after this table). |
steps | integer | 8 (min: 1, max: 50) | Number of inference steps. |
guidance_scale | number | 1 (min: 0, max: 10) | Guidance scale for high noise levels. |
guidance_scale_2 | number | 1 (min: 0, max: 10) | Guidance scale for low noise levels. |
seed | integer | | Random seed. Leave blank to randomize the seed. |
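The duration bounds line up with the 8-81 frame window if the model renders at 16 frames per second (8 / 16 = 0.5 s, 81 / 16 ≈ 5.06 s); that frame rate is inferred here, not stated on this page. Under that assumption, the clamping behaves roughly like this sketch:

```python
# Hedged sketch of how duration_seconds could map onto the 8-81 frame window.
# ASSUMED_FPS is inferred from the 0.5 s / 5.1 s bounds, not documented here.
ASSUMED_FPS = 16
MIN_FRAMES, MAX_FRAMES = 8, 81

def frames_for_duration(duration_seconds: float) -> int:
    """Convert a requested duration into a frame count, clamped to 8-81."""
    frames = round(duration_seconds * ASSUMED_FPS)
    return max(MIN_FRAMES, min(MAX_FRAMES, frames))

print(frames_for_duration(2.1))  # default duration -> about 34 frames
print(frames_for_duration(9.0))  # out-of-range request clamps to 81 frames
```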
{
  "type": "object",
  "title": "Input",
  "required": [
    "start_image",
    "end_image",
    "prompt"
  ],
  "properties": {
    "seed": {
      "type": "integer",
      "title": "Seed",
      "x-order": 9,
      "description": "Random seed. Leave blank to randomize the seed."
    },
    "steps": {
      "type": "integer",
      "title": "Steps",
      "default": 8,
      "maximum": 50,
      "minimum": 1,
      "x-order": 6,
      "description": "Number of inference steps."
    },
    "prompt": {
      "type": "string",
      "title": "Prompt",
      "x-order": 3,
      "description": "Describe the transition or motion between the two images."
    },
    "end_image": {
      "type": "string",
      "title": "End Image",
      "format": "uri",
      "x-order": 1,
      "description": "The final frame of the video."
    },
    "resolution": {
      "enum": [
        "480p",
        "720p"
      ],
      "type": "string",
      "title": "resolution",
      "description": "Set the output video resolution. 720p requires more VRAM.",
      "default": "480p",
      "x-order": 2
    },
    "start_image": {
      "type": "string",
      "title": "Start Image",
      "format": "uri",
      "x-order": 0,
      "description": "The starting frame of the video."
    },
    "guidance_scale": {
      "type": "number",
      "title": "Guidance Scale",
      "default": 1,
      "maximum": 10,
      "minimum": 0,
      "x-order": 7,
      "description": "Guidance scale for high noise levels."
    },
    "negative_prompt": {
      "type": "string",
      "title": "Negative Prompt",
      "x-order": 4,
      "description": "Negative prompt."
    },
    "duration_seconds": {
      "type": "number",
      "title": "Duration Seconds",
      "default": 2.1,
      "maximum": 5.1,
      "minimum": 0.5,
      "x-order": 5,
      "description": "Video duration in seconds. Clamped to model's 8-81 frames."
    },
    "guidance_scale_2": {
      "type": "number",
      "title": "Guidance Scale 2",
      "default": 1,
      "maximum": 10,
      "minimum": 0,
      "x-order": 8,
      "description": "Guidance scale for low noise levels."
    }
  }
}
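To catch malformed inputs before making a network call, a payload can be validated against this schema locally. The sketch below assumes the schema has been saved to a file named input_schema.json (a hypothetical filename) and uses the third-party jsonschema package.

```python
# Sketch: validate an input payload against the schema above before calling
# the API. input_schema.json is a hypothetical local copy of the schema.
import json
from jsonschema import validate, ValidationError

with open("input_schema.json") as f:
    input_schema = json.load(f)

payload = {
    "start_image": "https://example.com/first-frame.png",
    "end_image": "https://example.com/last-frame.png",
    "prompt": "a slow dolly zoom between the two frames",
    "resolution": "480p",
    "duration_seconds": 2.1,
}

try:
    validate(instance=payload, schema=input_schema)
    print("payload is valid")
except ValidationError as err:
    print(f"invalid payload: {err.message}")
```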
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
  "type": "string",
  "title": "Output",
  "format": "uri"
}
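Since the output is a single URI string, saving the generated video is an ordinary HTTP download. A minimal sketch using only the standard library, with a placeholder URI standing in for the real return value:

```python
# Sketch: the output is a single URI, so saving the video is a plain download.
# `output` stands in for the value returned by the API call; the URI below is
# a placeholder, not a real file.
import urllib.request

output = "https://replicate.delivery/.../output.mp4"
urllib.request.urlretrieve(output, "output.mp4")
print("saved output.mp4")
```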