lucataco / vseq2vseq

Text-to-video diffusion model with variable-length frame conditioning for infinite-length video

  • Public
  • 417 runs
  • A100 (80GB)
  • GitHub
  • Paper
  • License

Input

  • Input prompt (string). Default: "A stormtrooper surfing on the ocean"
  • Guidance scale (integer). Default: 20
  • Individually scale the image guidance (integer). Default: 12
  • Frames per second (integer). Default: 16
  • Number of frames (integer). Default: 24
  • Width (integer). Default: 384
  • Height (integer). Default: 192
  • Image width (integer). Default: 1152
  • Image height (integer). Default: 640
  • Number of steps (integer). Default: 30
  • Times (integer). Default: 8
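
A minimal sketch of calling the model through the Replicate Python client is below. The input key names (prompt, guidance_scale, image_guidance_scale, fps, num_frames, width, height, image_width, image_height, num_inference_steps, times) are assumptions inferred from the field labels above, not names confirmed by the model's schema; check the model's API tab for the exact keys.

    # Illustrative only: input keys are assumed from the field labels above.
    import replicate

    output = replicate.run(
        "lucataco/vseq2vseq",
        input={
            "prompt": "A stormtrooper surfing on the ocean",
            "guidance_scale": 20,         # overall guidance strength
            "image_guidance_scale": 12,   # separate scale for the image guidance
            "fps": 16,                    # playback frame rate
            "num_frames": 24,             # frames generated per pass
            "width": 384,
            "height": 192,
            "image_width": 1152,
            "image_height": 640,
            "num_inference_steps": 30,    # diffusion denoising steps
            "times": 8,                   # passes chained together for longer videos
        },
    )
    print(output)  # typically a URL to the generated video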


Run time and cost

This model costs approximately $0.28 to run on Replicate, or 3 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 4 minutes. The predict time for this model varies significantly based on the inputs.
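
For local runs, the model's Docker image exposes Cog's standard HTTP prediction API once the container is serving (for example on port 5000). A rough sketch of calling that endpoint from Python, assuming the container is already running locally and that the input keys match the assumptions above:

    # Sketch only: assumes the model's Cog container is already serving on
    # http://localhost:5000 and that the input keys match the assumed names above.
    import requests

    resp = requests.post(
        "http://localhost:5000/predictions",
        json={"input": {"prompt": "A stormtrooper surfing on the ocean", "num_frames": 24}},
    )
    resp.raise_for_status()
    print(resp.json().get("output"))  # typically a path or URL to the generated video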

Readme

Implementation of motexture/vseq2vseq

Usage

Increase the --times parameter to create even longer videos.

Additional info

For best results, --num-frames should be 16, 24, or 32. Higher values will result in slower motion.
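
As a back-of-envelope guide (assuming, as an interpretation of the description above, that each of the times passes contributes num_frames frames to the final video), the clip length works out roughly as follows:

    # Rough duration estimate; assumes each of the `times` passes adds
    # `num_frames` frames, which is an interpretation, not a documented guarantee.
    num_frames = 24   # frames generated per pass (16, 24, or 32 recommended)
    times = 8         # number of chained passes (increase for longer videos)
    fps = 16          # playback frame rate
    total_frames = num_frames * times
    print(f"{total_frames} frames at {fps} fps is about {total_frames / fps:.1f} seconds")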