cjwbw / kandinskyvideo

text-to-video generation model

  • Public
  • 1.2K runs
  • A100 (80GB)
  • GitHub
  • Paper
  • License

Input

  • string: Input prompt. Default: "a red car is drifting on the mountain road, close view, fast movement"
  • string: Things you do not want to see in the output (negative prompt).
  • integer: Width of the output video; lower this value if you run out of memory. Default: 640
  • integer: Height of the output video; lower this value if you run out of memory. Default: 384
  • integer: Number of denoising steps. Default: 50
  • number: Scale for classifier-free guidance. Default: 5
  • number: Scale for interpolation guidance. Default: 0.25
  • string: An enumeration. Default: "low"
  • integer: Frames per second of the output video. Default: 10
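
For reference, the sketch below shows one way these inputs might be passed through Replicate's Python client. The field names used here (prompt, width, height, num_inference_steps, guidance_scale, interpolation_guidance_scale, fps) are assumptions inferred from the descriptions above and are not shown on this page; the authoritative schema is the API listing on the model page.

# Minimal sketch using the Replicate Python client (pip install replicate).
# Requires the REPLICATE_API_TOKEN environment variable to be set.
# NOTE: the input field names are assumptions, not confirmed by this page.
import replicate

output = replicate.run(
    "cjwbw/kandinskyvideo",  # a specific version hash may be required by older clients
    input={
        "prompt": "a red car is drifting on the mountain road, close view, fast movement",
        "width": 640,
        "height": 384,
        "num_inference_steps": 50,
        "guidance_scale": 5,
        "interpolation_guidance_scale": 0.25,
        "fps": 10,
    },
)
print(output)  # URL of the generated video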

Run time and cost

This model costs approximately $0.23 to run on Replicate, or 4 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 3 minutes. The predict time for this model varies significantly based on the inputs.
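
Because the model is packaged with Cog, one way to run it locally is to start the model's Docker image (the exact image reference is listed on the model page) and query Cog's standard HTTP prediction endpoint. The sketch below assumes a container is already running and listening on port 5000, and reuses the same assumed input field names as the example above.

# Minimal sketch: querying a locally running Cog container for this model.
# Assumes the Docker image has been started with something like
#   docker run -p 5000:5000 --gpus=all <image reference from the model page>
import requests

resp = requests.post(
    "http://localhost:5000/predictions",
    json={"input": {"prompt": "a red car is drifting on the mountain road, close view, fast movement"}},
    timeout=600,  # video generation can take several minutes
)
resp.raise_for_status()
print(resp.json()["output"])  # generated video, typically returned as a data URI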

Readme

Kandinsky Video — a new text-to-video generation model

SoTA quality among open-source solutions


Kandinsky Video is a text-to-video generation model based on the FusionFrames architecture, which consists of two main stages: keyframe generation and interpolation. Our approach to temporal conditioning allows us to generate videos with high-quality appearance, smoothness, and dynamics.

Pipeline


The encoded text prompt enters the U-Net keyframe generation model with temporal layers or blocks, and the sampled latent keyframes are then passed to the latent interpolation model, which predicts three interpolation frames between every two keyframes. A temporal MoVQ-GAN decoder is used to obtain the final video.
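
As a rough illustration of this two-stage design, the sketch below mocks the data flow with dummy tensors: keyframes are sampled first, three frames are interpolated between each consecutive pair, and the interleaved latent sequence is decoded to pixels. All function names, shapes, and the "models" themselves are illustrative placeholders, not the actual API.

# Illustrative sketch of the two-stage pipeline (keyframes -> interpolation -> decode).
# Everything here is a placeholder standing in for the real components.
import numpy as np

def generate_keyframes(text_embedding, num_keyframes=4, latent_shape=(4, 48, 80)):
    # Stage 1: the text-conditioned U-Net with temporal layers samples latent keyframes.
    return [np.random.randn(*latent_shape) for _ in range(num_keyframes)]

def interpolate(frame_a, frame_b):
    # Stage 2: the interpolation model predicts 3 in-between latents per keyframe pair
    # (faked here with simple linear blending).
    return [frame_a + (frame_b - frame_a) * t for t in (0.25, 0.5, 0.75)]

def decode(latents):
    # A temporal MoVQ-GAN decoder maps latents back to pixel frames.
    return [np.clip(z[:3], -1, 1) for z in latents]

text_embedding = np.random.randn(768)          # stand-in for the Flan-UL2 encoding
keyframes = generate_keyframes(text_embedding)
latents = []
for a, b in zip(keyframes, keyframes[1:]):
    latents.append(a)
    latents.extend(interpolate(a, b))          # 3 interpolated frames per keyframe pair
latents.append(keyframes[-1])

video = decode(latents)
# 4 keyframes + 3 * (4 - 1) interpolated frames = 13 frames in total
print(len(video))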

Architecture details

  • Text encoder (Flan-UL2) - 8.6B
  • Latent Diffusion U-Net3D - 4.0B
  • MoVQ encoder/decoder - 256M
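
Taken together, the three components hold roughly 12.9B parameters (8.6B + 4.0B + 0.256B), most of them in the Flan-UL2 text encoder.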

BibTeX

If you use our work in your research, please cite our publication:

@article{arkhipkin2023fusionframes,
  title     = {FusionFrames: Efficient Architectural Aspects for Text-to-Video Generation Pipeline},
  author    = {Arkhipkin, Vladimir and Shaheen, Zein and Vasilev, Viacheslav and Dakhova, Elizaveta and Kuznetsov, Andrey and Dimitrov, Denis},
  journal   = {arXiv preprint arXiv:2311.13073},
  year      = {2023}, 
}