shimmercam / animatediff-v3

AnimateDiff v3 + SparseCtrl: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning. Created with Shimmer.

  • Public
  • 699 runs
  • A100 (80GB)
  • GitHub
  • Paper
  • License

Input

  • Input prompt (string). Default: "closeup face photo of man in black clothes, night city street, fireworks in background"
  • Negative prompt (string). Default: "worst quality, low quality, letterboxed"
  • Upload a controlnet image (file). Default: "https://raw.githubusercontent.com/guoyww/AnimateDiff/main/__assets__/demos/image/RealisticVision_firework.png"
  • Select a DreamBooth model (string). Default: "None"
  • Number of inference steps (integer). Default: 25
  • Guidance scale (number). Default: 8.5
  • Video length (integer). Default: 16
  • Width of the output video (integer)
  • Height of the output video (integer)
  • Random seed (integer). Default: -1
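A minimal sketch of running the model with these inputs through the Replicate Python client. The input key names used below (prompt, negative_prompt, num_inference_steps, guidance_scale, video_length, seed) are assumptions inferred from the field descriptions above; check the model's API schema on Replicate for the exact names.

```python
# Sketch of a prediction request with the Replicate Python client.
# Requires `pip install replicate` and REPLICATE_API_TOKEN in the environment.
# The input keys below are assumptions; the real schema may differ.
import replicate

output = replicate.run(
    "shimmercam/animatediff-v3",  # optionally pin a version, e.g. ":<version-id>"
    input={
        "prompt": (
            "closeup face photo of man in black clothes, "
            "night city street, fireworks in background"
        ),
        "negative_prompt": "worst quality, low quality, letterboxed",
        "num_inference_steps": 25,
        "guidance_scale": 8.5,
        "video_length": 16,
        "seed": -1,  # -1 selects a random seed
    },
)
print(output)
```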

Output


This output was created using a different version of the model, shimmercam/animatediff-v3:dd87e0a6.
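If you call the model from code as in the sketch above, the returned output can be saved locally. This assumes the client returns a plain URL string (or a list of URL strings); newer client versions may instead return file-like objects.

```python
# Sketch: save the generated video returned by the replicate.run call above.
# Assumes `output` is a URL string or a list of URL strings.
import urllib.request

url = output[0] if isinstance(output, (list, tuple)) else output
urllib.request.urlretrieve(url, "animatediff_output.mp4")
```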

Run time and cost

This model costs approximately $0.068 per run on Replicate, or roughly 14 runs per $1, but the exact cost varies depending on your inputs. The model is also open source, so you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 49 seconds. The predict time for this model varies significantly based on the inputs.
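As a rough sanity check, the quoted per-run cost and typical runtime imply the following per-second rate, assuming billing is per second of A100 time:

```python
# Back-of-the-envelope check on the figures above (assumes per-second billing).
typical_runtime_s = 49    # "Predictions typically complete within 49 seconds"
cost_per_run = 0.068      # approximate cost per run quoted above

print(cost_per_run / typical_runtime_s)  # ~0.0014 dollars per second of A100 time
print(1 / cost_per_run)                  # ~14.7 runs per dollar
```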

Readme

Compare, edit, and share this model at https://shimmer.cam

@article{guo2023animatediff,
  title   = {AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
  author  = {Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Wang, Yaohui and Qiao, Yu and Lin, Dahua and Dai, Bo},
  journal = {arXiv preprint arXiv:2307.04725},
  year    = {2023}
}

@article{guo2023sparsectrl,
  title   = {SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models},
  author  = {Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
  journal = {arXiv preprint arXiv:2311.16933},
  year    = {2023}
}