lucataco / vseq2vseq

Text-to-video diffusion model with variable-length frame conditioning for infinite-length video

Run time and cost

This model costs approximately $0.28 to run on Replicate, or about 3 runs per $1, though this varies depending on your inputs. It is also open source, and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 4 minutes, though predict time varies significantly with the inputs.
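
Because the model is open source, predictions can also be served locally from the published container image. Below is a minimal sketch that calls the standard Cog HTTP API; the image path, port, and the "prompt" input name are assumptions about this particular deployment, so check the container's schema before relying on them.

```python
# A minimal sketch for querying the model locally through Cog's HTTP API.
# Start the container first (assumes a GPU and the NVIDIA container toolkit;
# you may need to pin a specific version tag from the model's versions page):
#   docker run -d --gpus all -p 5000:5000 r8.im/lucataco/vseq2vseq
# The "prompt" input name is an assumption; the container's /openapi.json
# describes the exact input schema.
import requests

resp = requests.post(
    "http://localhost:5000/predictions",
    json={"input": {"prompt": "a rocket launching into space"}},
    timeout=600,  # predictions typically take a few minutes
)
resp.raise_for_status()
prediction = resp.json()
print(prediction["status"], prediction["output"])
```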

Readme

Implementation of motexture/vseq2vseq

Usage

Increase the --times parameter to create even longer videos.
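
For instance, through Replicate's Python client a longer video might be requested as below. The times and num_frames input names mirror the CLI flags mentioned in this readme, but treating them as this deployment's input schema is an assumption, and the example prompt is hypothetical.

```python
# A minimal sketch using Replicate's Python client (pip install replicate);
# REPLICATE_API_TOKEN must be set in the environment. The input names mirror
# the --times / --num-frames flags in this readme but are assumptions about
# the deployed model's exact schema.
import replicate

output = replicate.run(
    "lucataco/vseq2vseq",
    input={
        "prompt": "a drone flyover of a coastal city at sunset",  # hypothetical
        "num_frames": 24,  # frames per generated chunk (see Additional info)
        "times": 4,        # increase for a longer video
    },
)
print(output)  # typically a URL to the generated video file
```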

Additional info

For best results, --num-frames should be 16, 24, or 32. Higher values will result in slower motion.
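
As a rough back-of-the-envelope when planning inputs: if each pass generates about num_frames frames and the video is extended times times, total length grows with their product. Both the fixed 16 fps encode rate and the simple product below are assumptions, since consecutive chunks share conditioning frames.

```python
# Rough, assumption-laden duration arithmetic: treats the output as roughly
# num_frames * times frames at an assumed 16 fps encode rate. Real outputs
# may be shorter, since chunks overlap on their conditioning frames.
FPS = 16  # assumption; the actual encode rate may differ
for num_frames in (16, 24, 32):
    times = 4
    total_frames = num_frames * times
    print(f"num_frames={num_frames}, times={times} -> ~{total_frames / FPS:.1f}s")
```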