wcarle / stable-diffusion-videos-ghibli

Generate videos by interpolating the latent space of Stable Diffusion using the Ghibli-Diffusion Model



Run time and cost

This model runs on Nvidia A100 (40GB) GPU hardware. Predictions typically complete within 7 minutes. The predict time for this model varies significantly based on the inputs.

Readme

stable-diffusion-videos-ghibli

Based on nateraw’s stable-diffusion-videos project: https://replicate.com/nateraw/stable-diffusion-videos (code: https://github.com/nateraw/stable-diffusion-videos)

This version swaps the standard Stable Diffusion model for Ghibli-Diffusion: https://huggingface.co/nitrosocke/Ghibli-Diffusion

stable-diffusion-videos generates videos by interpolating through the latent space of Stable Diffusion: intermediate latents between two endpoints are decoded into frames, producing a smooth morph.

You can either dream up different versions of the same prompt, or morph between different text prompts (with seeds set for each for reproducibility).
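The core trick is interpolating between two seeded noise latents and decoding each intermediate latent into a frame. Below is a minimal NumPy sketch of that idea using spherical linear interpolation (slerp), which is commonly used for Gaussian noise latents; the latent shape, seeds, and function names here are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-7):
    """Spherical linear interpolation between two flat latent vectors."""
    v0_n = v0 / np.linalg.norm(v0)
    v1_n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1

# Two seeded noise latents; fixing the seeds makes each endpoint reproducible
shape = (4, 64, 64)  # hypothetical SD latent shape for a 512x512 image
a = np.random.default_rng(42).standard_normal(shape).ravel()
b = np.random.default_rng(1337).standard_normal(shape).ravel()

# One latent per video frame; each would then be decoded by the diffusion model
frames = [slerp(t, a, b) for t in np.linspace(0.0, 1.0, 8)]
```

Because the endpoints come from fixed seeds, rerunning the script reproduces the same trajectory; using more interpolation steps yields a longer, smoother video.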