replicategithubwc / stable-video-diffusion

  • Public
  • 11 runs

Run replicategithubwc/stable-video-diffusion with an API

Use one of our client libraries to get started quickly. Each library page links to the Playground tab, where you can tweak the inputs, see the results, and copy the corresponding code into your own project.
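For example, a minimal sketch with the Python client. This assumes the `replicate` package and an API token in the environment; depending on the client version, the model identifier may need a pinned version hash appended (`owner/name:version`):

```python
# Minimal sketch of calling this model with the Replicate Python client.
# Assumes: pip install replicate, and REPLICATE_API_TOKEN set in the environment.
import os

MODEL = "replicategithubwc/stable-video-diffusion"

# Any field omitted here falls back to its default from the input schema.
payload = {
    "prompt": "astronaut riding a horse on mars, beautiful, 8k",
    "width": 1024,
    "height": 576,
    "num_frames": 24,
    "fps": 24,
}

if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate

    # Returns the output described by the output schema: a URI string.
    output = replicate.run(MODEL, input=payload)
    print(output)
```

The network call is guarded on the API token so the snippet is safe to run as-is; the `replicate.run` call blocks until the prediction finishes and returns the video URI.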

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

prompt (string)
  Default: astronaut riding a horse on mars, beautiful, 8k, perfect, award winning, national geographic
  Input prompt

negative_prompt (string)
  Default: very blue, dust, noisy, washed out, ugly, distorted, broken
  Specify things to not see in the output

width (integer)
  Default: 1024
  Width of output image

height (integer)
  Default: 576
  Height of output image

num_frames (integer)
  Default: 24
  Number of frames for the output video

fps (integer)
  Default: 24
  Frames per second for the output video

num_inference_steps (integer)
  Default: 50 (min: 1, max: 500)
  Number of denoising steps

guidance_scale (number)
  Default: 12.5 (min: 1, max: 20)
  Scale for classifier-free guidance

scheduler (string, enum)
  Default: K_EULER_ANCESTRAL
  Options: DDIM, K_EULER, DPMSolverMultistep, K_EULER_ANCESTRAL, PNDM, KLMS, DEISMultistepScheduler, DPM++_2M_Karras
  Choose a scheduler.

seed (integer)
  No default. Random seed; leave blank to randomize the seed.
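The numeric ranges and the scheduler enum above can be checked client-side before a request is sent. A hypothetical helper (the function name and error messages are illustrative, not part of the API):

```python
# Hypothetical client-side check of the schema constraints listed above.
SCHEDULERS = {
    "DDIM", "K_EULER", "DPMSolverMultistep", "K_EULER_ANCESTRAL",
    "PNDM", "KLMS", "DEISMultistepScheduler", "DPM++_2M_Karras",
}

def validate(inputs: dict) -> list:
    """Return a list of constraint violations (empty if the inputs are valid)."""
    errors = []
    steps = inputs.get("num_inference_steps", 50)  # default from the schema
    if not 1 <= steps <= 500:
        errors.append("num_inference_steps must be between 1 and 500")
    scale = inputs.get("guidance_scale", 12.5)  # default from the schema
    if not 1 <= scale <= 20:
        errors.append("guidance_scale must be between 1 and 20")
    scheduler = inputs.get("scheduler", "K_EULER_ANCESTRAL")
    if scheduler not in SCHEDULERS:
        errors.append("unknown scheduler: " + scheduler)
    return errors

print(validate({"num_inference_steps": 50}))  # → []
print(validate({"guidance_scale": 25.0}))     # → ["guidance_scale must be between 1 and 20"]
```

Passing an empty dict validates cleanly, since every constrained field has an in-range default.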

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "string",
  "title": "Output",
  "format": "uri"
}
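Since the response is a single URI string (`"format": "uri"`), a caller typically downloads the file it points at. A sketch using only the standard library; the helper name and example path are assumptions:

```python
# The output schema says the response is one URI string pointing at the video.
# Hypothetical helper: sanity-check the URI, then fetch it with the stdlib.
from urllib.parse import urlparse
from urllib.request import urlretrieve

def save_output(output_uri: str, path: str = "output.mp4") -> str:
    """Download the generated video if the URI looks valid; return the local path."""
    parsed = urlparse(output_uri)
    if parsed.scheme not in ("http", "https"):
        raise ValueError("unexpected output URI: " + output_uri)
    urlretrieve(output_uri, path)  # network call
    return path

# Usage (not run here; the URL is a placeholder):
# save_output("https://example.com/prediction-output.mp4")
```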