wcarle / stable-diffusion-videos-openjourney

Generate videos by interpolating the latent space of Stable Diffusion using the Openjourney Model

Run time and cost

This model runs on Nvidia A100 (40GB) GPU hardware. Predictions typically complete within 12 minutes, but run time varies significantly with the inputs.

Readme

stable-diffusion-videos-openjourney

Based on nateraw’s stable-diffusion-videos project: https://replicate.com/nateraw/stable-diffusion-videos (source: https://github.com/nateraw/stable-diffusion-videos)

This version swaps the standard Stable Diffusion model for Openjourney: https://huggingface.co/prompthero/openjourney

Stable-diffusion-videos allows you to generate videos by interpolating the latent space of Stable Diffusion.

You can either dream up different versions of the same prompt, or morph between different text prompts (with seeds set for each for reproducibility).
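As a rough sketch of how such a walk might look, the snippet below uses the stable_diffusion_videos library's StableDiffusionWalkPipeline with the Openjourney weights loaded in place of vanilla Stable Diffusion. Argument names follow the upstream project's examples and may differ between library versions; the prompts, seeds, and output names are illustrative only, and the "mdjrny-v4 style" token is the trigger phrase suggested on the Openjourney model card.

```python
# Minimal sketch, assuming the stable_diffusion_videos API from nateraw's project;
# exact argument names may vary between library versions.
import torch
from stable_diffusion_videos import StableDiffusionWalkPipeline

# Load the walk pipeline with the Openjourney weights instead of vanilla Stable Diffusion.
pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "prompthero/openjourney",
    torch_dtype=torch.float16,
).to("cuda")

# Morph between two prompts; one seed per prompt keeps each endpoint reproducible.
video_path = pipeline.walk(
    prompts=[
        "a dreamy forest at dawn, mdjrny-v4 style",
        "a neon city at night, mdjrny-v4 style",
    ],
    seeds=[42, 1337],
    num_interpolation_steps=60,   # frames generated between each pair of prompts
    height=512,
    width=512,
    output_dir="dreams",          # frames and the rendered video are written here
    name="forest_to_city",        # hypothetical run name for the output folder
    guidance_scale=8.5,
    num_inference_steps=50,
)
print(video_path)
```

To dream up variations of a single prompt instead of morphing between two, you can pass the same prompt twice with two different seeds, so the walk interpolates between two latents of the same text.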