lucataco / animate-diff-sdxl-lcm

Animate Your Personalized Text-to-Image Diffusion Models with SDXL and LCM (Updated 1 year, 7 months ago)

  • Public
  • 332 runs
  • Paper
  • License

Input

  • string: Select a motion model (currently only one available). Default: "mm_sdxl_v10_beta"
  • string: Select a model checkpoint. Default: "dynavision"
  • boolean: Default: false
  • string: Aspect ratio. Default: "1:1"
  • integer (minimum: 16): Video length. Default: 16
  • string: Input prompt. Default: "A panda standing on a surfboard in the ocean in sunset, 4k, high resolution. Realistic, Cinematic, high resolution"
  • string: Negative prompt. Default: ""
  • integer (minimum: 1, maximum: 100): Number of inference steps. Default: 6
  • number (minimum: 1, maximum: 10): Guidance scale. Default: 1
  • integer (minimum: 0, maximum: 2147483647): Seed (0 = random)
  • boolean: Returns an .mp4 if true or a .gif if false. Default: true
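
For reference, here is a minimal sketch of calling the model through the Replicate Python client. The page lists inputs only by description, so the field names used below (motion_module, model, prompt, negative_prompt, steps, guidance_scale, video_length, aspect_ratio, seed, mp4) are assumptions; check the model's API schema on Replicate for the real names before running it.

    import replicate  # pip install replicate; requires REPLICATE_API_TOKEN to be set

    # Field names below are guesses based on the input descriptions above;
    # confirm them against the model's API schema on replicate.com.
    output = replicate.run(
        "lucataco/animate-diff-sdxl-lcm",
        input={
            "motion_module": "mm_sdxl_v10_beta",   # the only motion model currently available
            "model": "dynavision",                 # model checkpoint
            "prompt": "A panda standing on a surfboard in the ocean in sunset, "
                      "4k, high resolution. Realistic, Cinematic, high resolution",
            "negative_prompt": "",
            "steps": 6,                # 1-100 inference steps; 6 suits LCM sampling
            "guidance_scale": 1,       # 1-10; low guidance is typical for LCM
            "video_length": 16,        # minimum 16
            "aspect_ratio": "1:1",
            "seed": 0,                 # 0 = random
            "mp4": True,               # True returns .mp4, False returns .gif
        },
    )
    print(output)  # URL (or file handle) of the generated video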

Output

A video file, returned as an .mp4 or a .gif depending on the output format input.

Run time and cost

This model costs approximately $0.40 to run on Replicate (roughly 2 runs per $1), but the exact cost varies depending on your inputs. It is also open source, so you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 5 minutes, though predict time varies significantly with the inputs.
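
Because the model is open source and packaged as a standard Replicate (Cog) container, it can also be run locally. The sketch below assumes Docker with NVIDIA GPU support, uses a placeholder for the image digest (shown on the model's Replicate page), and posts to Cog's standard prediction endpoint; the input field names are the same assumptions as in the example above.

    # Start the published container first (digest placeholder is illustrative):
    #   docker run -d -p 5000:5000 --gpus=all \
    #       r8.im/lucataco/animate-diff-sdxl-lcm@sha256:<digest>
    import requests

    resp = requests.post(
        "http://localhost:5000/predictions",
        json={
            "input": {
                # Hypothetical field names; the running container serves its real
                # schema at http://localhost:5000/openapi.json.
                "prompt": "A panda standing on a surfboard in the ocean in sunset, 4k",
                "steps": 6,
                "guidance_scale": 1,
            }
        },
        timeout=600,  # video generation can take several minutes
    )
    resp.raise_for_status()
    print(resp.json()["output"])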

Readme

https://github.com/guoyww/AnimateDiff/tree/sdxl/

This is a beta version of the motion module for SDXL. You can now generate high-resolution videos with SDXL, with or without personalized models. A checkpoint with better quality will be available soon. Stay tuned.
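
The upstream motion module can also be tried directly with Hugging Face diffusers. The sketch below is one assumed way to reproduce a similar setup (the AnimateDiff SDXL beta adapter plus an LCM-LoRA for few-step sampling); it is not necessarily the exact pipeline this Replicate model runs, and the repository IDs and pipeline class require a recent diffusers release.

    import torch
    from diffusers import AnimateDiffSDXLPipeline, LCMScheduler, MotionAdapter
    from diffusers.utils import export_to_gif

    # Beta SDXL motion adapter from the AnimateDiff authors (assumed repo ID).
    adapter = MotionAdapter.from_pretrained(
        "guoyww/animatediff-motion-adapter-sdxl-beta", torch_dtype=torch.float16
    )

    # Any SDXL base or personalized SDXL checkpoint can be animated.
    pipe = AnimateDiffSDXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        motion_adapter=adapter,
        torch_dtype=torch.float16,
    ).to("cuda")

    # LCM-style sampling: few steps, low guidance (matches the defaults above).
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

    frames = pipe(
        prompt="A panda standing on a surfboard in the ocean in sunset, 4k",
        num_inference_steps=6,
        guidance_scale=1.0,
        num_frames=16,
    ).frames[0]
    export_to_gif(frames, "panda_surfing.gif")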