wcarle / stable-diffusion-videos

Generate videos by interpolating the latent space of Stable Diffusion

  • Public
  • 934 runs
  • GitHub
  • License

Run time and cost

This model costs approximately $0.60 to run on Replicate, or about 1 run per $1, though the cost varies depending on your inputs. It is also open source, so you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 8 minutes, though the predict time varies significantly with the inputs.
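
If you want to call the hosted model from code rather than the web form, a minimal sketch using Replicate's Python client might look like the example below. The input field names here (prompts, seeds, num_interpolation_steps) are illustrative assumptions; check the model's input schema on Replicate for the exact parameters.

import replicate

# Requires the REPLICATE_API_TOKEN environment variable to be set.
# You may need to pin a specific version, e.g. "wcarle/stable-diffusion-videos:<version hash>".
output = replicate.run(
    "wcarle/stable-diffusion-videos",
    input={
        # Hypothetical field names for illustration only.
        "prompts": "a painting of a forest in spring | the same forest in winter",
        "seeds": "42 | 1337",
        "num_interpolation_steps": 50,
    },
)
print(output)  # typically a URL to the generated video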

Readme

stable-diffusion-videos

This is an updated version of nateraw’s model: https://replicate.com/nateraw/stable-diffusion-videos (source: https://github.com/nateraw/stable-diffusion-videos).

That version hasn’t been updated with the latest changes from the project, so I forked it and pushed up a new version.

Stable-diffusion-videos allows you to generate videos by interpolating the latent space of Stable Diffusion.

You can either dream up different versions of the same prompt, or morph between different text prompts (with seeds set for each for reproducibility).
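
If you run the project locally instead of on Replicate, the upstream library exposes a walk() helper that performs this interpolation. The sketch below follows the upstream project's documented usage; the prompt text, seeds, and step counts are illustrative values, not defaults of this model.

import torch
from stable_diffusion_videos import StableDiffusionWalkPipeline

# Load the walk pipeline on the GPU in half precision.
pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Morph between two prompts; one seed per prompt keeps the endpoints reproducible.
video_path = pipeline.walk(
    prompts=["a painting of a forest in spring", "a painting of a forest in winter"],
    seeds=[42, 1337],
    num_interpolation_steps=60,  # frames generated between each pair of prompts
    height=512,
    width=512,
    output_dir="dreams",
    name="forest_seasons",
    guidance_scale=8.5,
    num_inference_steps=50,
)
print(video_path)  # path to the rendered video

To "dream" variations of a single prompt instead of morphing, pass one prompt with several seeds, so the walk interpolates between different latent noise samples of the same text.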