ductridev / animate-diff

Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning

  • Public
  • 271 runs
  • GitHub
  • Paper
  • License

Run ductridev/animate-diff with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
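As a sketch, the schema fields documented below can be assembled into an input payload for the Replicate Python client. The field names and defaults are taken from this page; the version hash and the commented-out call are illustrative placeholders, not verified values.

```python
# Input payload for ductridev/animate-diff, using the field names and
# default values from the input schema on this page.
input_payload = {
    "model": "ToonYou",                       # base text-to-image model
    "motion_model": "Motion V14 Checkpoint",  # motion module checkpoint
    "additional_model": "None",
    "prompt": "anime",
    "negative_prompt": "noise, text, nude",
    "cfg_scale": 9,             # classifier-free guidance, 1..20
    "height": 512,              # minimum 512
    "width": 512,               # minimum 512
    "num_inference_steps": 25,  # 1..500
    "lora_alpha": 0.6,          # 0.1..1
    "nums_frame": 16,           # minimum 8
    "fps": 24,
}

# Hypothetical call via the official Replicate client; requires
# REPLICATE_API_TOKEN in the environment and a real version hash:
# import replicate
# output_url = replicate.run("ductridev/animate-diff:<version>", input=input_payload)
```

Omitting a key from the payload makes the API fall back to that field's default, so only the fields you want to override need to be present.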

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

Field / Type / Default value / Description

model
string (enum)
Default: ToonYou
Options: ToonYou, Lyriel, RcnzCartoon, MajicMix, RealisticVision, Tusun, FilmVelvia, GhibliBackground, Genshen Impact-Yoimiya
Base model to run. All models are provided as text-to-image models at civitai.com.

motion_model
string (enum)
Default: Motion V14 Checkpoint
Options: Motion V14 Checkpoint, Motion V15 Checkpoint
Motion model checkpoint to run.

additional_model
string (enum)
Default: None
Options: None, Hold Sign
Additional model to combine with the main model.

init_image
string
Initial image for the video generation.

prompt
string
Default: anime
The prompt or prompts to guide the video generation.

negative_prompt
string
Default: noise, text, nude
The prompt or prompts not to guide the video generation.

cfg_scale
number
Default: 9 (min: 1, max: 20)
Scale for classifier-free guidance.

height
integer
Default: 512 (min: 512)
Height of the generated video.

width
integer
Default: 512 (min: 512)
Width of the generated video.

num_inference_steps
integer
Default: 25 (min: 1, max: 500)
Number of denoising steps.

seed
integer
Random seed. Leave blank to randomize the seed.

lora_alpha
number
Default: 0.6 (min: 0.1, max: 1)
Scaling factor for the LoRA weight matrices.

nums_frame
integer
Default: 16 (min: 8)
Number of frames in the output video.

fps
integer
Default: 24
Frames per second of the output video.

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "string",
  "title": "Output",
  "format": "uri"
}
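Since the output is a single URI-formatted string, a minimal sketch of checking a response against this schema looks like the following (the example URL is purely illustrative, not a real delivery address):

```python
from urllib.parse import urlparse

def is_valid_output(output: object) -> bool:
    # Output schema: {"type": "string", "format": "uri"} -- a single URI string.
    if not isinstance(output, str):
        return False
    parts = urlparse(output)
    # A usable URI needs at least a scheme and a network location.
    return bool(parts.scheme and parts.netloc)

# Illustrative value only; a real run returns a URL to the rendered video,
# which you can then download with any HTTP client.
example_output = "https://replicate.delivery/example/output.mp4"
```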