wcarle / text2video-zero

The Picsart Text2Video-Zero model leverages the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain.

  • Public
  • 2.2K runs
  • A100 (80GB)
  • GitHub
  • Paper
  • License

Input

string

Input prompt.

Default: "a cat"

string

Negative prompt.

Default: ""

string

Leave blank to randomize the seed.

integer
(minimum: 1, maximum: 10)

Chunk size: Number of frames processed at once. Reduce for lower memory usage.

Default: 8

integer
(minimum: -20, maximum: 20)

Global Translation $\delta_{x}$

Default: 12

integer
(minimum: -20, maximum: 20)

Global Translation $\delta_{y}$

Default: 12

integer
(minimum: 1, maximum: 50)

Timestep t0: Perform DDPM steps from t0 to t1. The larger the gap between t0 and t1, the more variance between the frames. Ensure t0 < t1.

Default: 44

integer
(minimum: 1, maximum: 50)

Timestep t1: Perform DDPM steps from t0 to t1. The larger the gap between t0 and t1, the more variance between the frames. Ensure t0 < t1.

Default: 47

number
(minimum: 0, maximum: 0.9)

Ratio of tokens to merge. The higher the ratio, the more compression (lower memory usage and faster inference).

Default: 0

integer

Resolution of the video (square)

Default: 512

integer

Number of frames in the video

Default: 8

integer
(minimum: 5, maximum: 60)

Frame rate for the video.

Default: 15
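
For reference, here is a minimal sketch of calling the model with these inputs through the Replicate Python client (requires `REPLICATE_API_TOKEN` to be set). The input key names below are assumptions inferred from the parameter descriptions above, not taken from the model's published schema, so check the model's API documentation for the exact identifiers before relying on them.

```python
# Minimal sketch of invoking this model via the Replicate Python client.
# NOTE: the input key names are assumptions inferred from the parameter
# descriptions above, not the model's published schema.
import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set

output = replicate.run(
    "wcarle/text2video-zero",
    input={
        "prompt": "a cat",              # input prompt
        "negative_prompt": "",          # negative prompt
        "chunk_size": 8,                # frames processed at once
        "motion_field_strength_x": 12,  # global translation delta_x (assumed name)
        "motion_field_strength_y": 12,  # global translation delta_y (assumed name)
        "t0": 44,                       # DDPM start timestep
        "t1": 47,                       # DDPM end timestep (ensure t0 < t1)
        "merging_ratio": 0,             # token merging ratio (assumed name)
        "resolution": 512,              # square output resolution
        "video_length": 8,              # number of frames (assumed name)
        "fps": 15,                      # frame rate
    },
)
print(output)  # URL of the generated video file
```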


Run time and cost

This model costs approximately $0.23 to run on Replicate, or 4 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 3 minutes. The predict time for this model varies significantly based on the inputs.

Readme

https://github.com/Picsart-AI-Research/Text2Video-Zero

Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain.

Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object.
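
In rough code terms, these two modifications might be sketched as follows. This is an illustrative simplification, not the authors' implementation: it assumes latents stored as [channels, H, W] tensors and attention tensors of shape [frames, tokens, dim], and the helper names are made up for the example.

```python
import torch
import torch.nn.functional as F

def add_motion_dynamics(first_latent, num_frames, delta_x, delta_y):
    """Modification (i), simplified: warp the first frame's latent by a global
    translation that grows with the frame index, so the scene and background
    move coherently. first_latent: [C, H, W]."""
    frames = []
    for k in range(num_frames):
        # Affine matrix for a pure translation in normalized [-1, 1] coordinates.
        theta = torch.tensor([
            [1.0, 0.0, -2.0 * k * delta_x / first_latent.shape[-1]],
            [0.0, 1.0, -2.0 * k * delta_y / first_latent.shape[-2]],
        ]).unsqueeze(0)
        grid = F.affine_grid(theta, [1, *first_latent.shape], align_corners=False)
        frames.append(F.grid_sample(first_latent.unsqueeze(0), grid, align_corners=False))
    return torch.cat(frames, dim=0)  # [num_frames, C, H, W]

def cross_frame_attention(q, k, v):
    """Modification (ii), simplified: every frame's queries attend to the keys
    and values of the *first* frame instead of its own, which keeps the
    foreground object's appearance consistent. q, k, v: [frames, tokens, dim]."""
    k0 = k[:1].expand_as(k)  # reuse frame 0's keys for all frames
    v0 = v[:1].expand_as(v)  # reuse frame 0's values for all frames
    attn = torch.softmax(q @ k0.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v0
```

The translation applied by the first helper corresponds to the Global Translation $\delta_{x}$ / $\delta_{y}$ inputs listed above; the second helper swaps each frame's self-attention keys and values for those of frame 0, which is what preserves the foreground identity across frames.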

Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing.

As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data.