lucataco / animate-diff

Animate Your Personalized Text-to-Image Diffusion Models

  • Public
  • 291.4K runs
  • A100 (80GB)
  • GitHub
  • Paper
  • License

Input

  • Motion model (string). Default: "mm_sd_v14"
  • Module (string). Default: "toonyou_beta3.safetensors"
  • Prompt (string). Default: "masterpiece, best quality, 1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season, wisteria, petals, flower, plum blossoms, outdoors, falling petals, white hair, black eyes"
  • Negative prompt (string). Default: ""
  • Number of inference steps (integer; minimum: 1, maximum: 100). Default: 25
  • Guidance scale (number; minimum: 1, maximum: 10). Default: 7.5
  • Seed (integer; 0 = random, maximum: 2147483647)
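The inputs above can be passed to the model through the Replicate Python client. This is a minimal sketch, not the model's definitive schema: the field names in the payload (`motion_module`, `path`, `n_prompt`, etc.) are assumptions inferred from the form labels, so check the model's API tab on Replicate for the exact names before using it.

```python
def build_input(prompt,
                motion_module="mm_sd_v14",
                module="toonyou_beta3.safetensors",
                negative_prompt="",
                steps=25,
                guidance_scale=7.5,
                seed=0):
    """Assemble and range-check an input payload for the model.

    Field names are assumptions based on the form above; verify
    them against the model's API schema on Replicate.
    """
    if not 1 <= steps <= 100:
        raise ValueError("steps must be between 1 and 100")
    if not 1 <= guidance_scale <= 10:
        raise ValueError("guidance_scale must be between 1 and 10")
    if not 0 <= seed <= 2147483647:
        raise ValueError("seed must be between 0 and 2147483647")
    return {
        "motion_module": motion_module,   # assumed field name
        "path": module,                   # assumed field name
        "prompt": prompt,
        "n_prompt": negative_prompt,      # assumed field name
        "steps": steps,
        "guidance_scale": guidance_scale,
        "seed": seed,                     # 0 = random seed
    }

if __name__ == "__main__":
    # Requires `pip install replicate` and REPLICATE_API_TOKEN set
    # in the environment.
    import replicate
    output = replicate.run(
        "lucataco/animate-diff",
        input=build_input("masterpiece, best quality, 1girl, solo, "
                          "cherry blossoms"),
    )
    print(output)
```

Keeping the range checks client-side mirrors the constraints shown in the form, so invalid values fail fast instead of after a round trip to the API.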

Run time and cost

This model costs approximately $0.096 to run on Replicate, or 10 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 69 seconds. The predict time for this model varies significantly based on the inputs.

Readme

About

This is my attempt at implementing AnimateDiff.

Based on the original GitHub repo: guoyww/animatediff

Support

Give me a follow if you like my work! @lucataco93