ji4chenli / t2v-turbo

Fast and High-Quality Text-to-video Generation

  • Public
  • 4.5K runs
  • L40S
  • GitHub
  • Paper

Input

  • Input prompt (string). Default: "With the style of low-poly game art, A majestic, white horse gallops gracefully across a moonlit beach"
  • Number of denoising steps (integer, minimum: 1, maximum: 8). Default: 4
  • Scale for classifier-free guidance (number, minimum: 1, maximum: 20). Default: 7.5
  • Number of video frames (integer). Default: 16
  • FPS of the output video (integer). Default: 8
  • Random seed (integer). Leave blank to randomize the seed.
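
To call the model programmatically, here is a minimal sketch using the official Replicate Python client. The input keys (prompt, num_inference_steps, guidance_scale, num_frames, fps, seed) are assumptions inferred from the parameter descriptions above, not confirmed field names; check the model's API schema on Replicate for the exact ones.

import replicate

# Minimal sketch of calling the model via the Replicate Python client.
# Requires the REPLICATE_API_TOKEN environment variable to be set.
# The input keys below are assumptions inferred from the parameter
# descriptions above; consult the model's API schema for the exact names.
output = replicate.run(
    "ji4chenli/t2v-turbo",  # older clients may require pinning a version: "ji4chenli/t2v-turbo:<version>"
    input={
        "prompt": "With the style of low-poly game art, A majestic, white horse gallops gracefully across a moonlit beach",
        "num_inference_steps": 4,  # denoising steps, 1-8
        "guidance_scale": 7.5,     # classifier-free guidance, 1-20
        "num_frames": 16,
        "fps": 8,
        # "seed": 42,              # omit to randomize
    },
)
print(output)  # URL (or file handle) of the generated video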

Run time and cost

This model costs approximately $0.021 per run on Replicate (about 47 runs per $1), though the exact cost varies with your inputs. It is also open source, and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 23 seconds, though predict time varies significantly with the inputs.
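
Since the model is also published as a container, one way to run it locally is to start the image with Docker and send a prediction request over HTTP. The sketch below assumes the image follows Replicate's usual r8.im/<owner>/<model> naming and exposes Cog's standard /predictions endpoint on port 5000; the input keys carry the same assumptions as the API example above.

import requests

# Minimal sketch of querying a locally running Cog container, assuming it was
# started with something like:
#   docker run -d -p 5000:5000 --gpus all r8.im/ji4chenli/t2v-turbo
# The endpoint and payload shape follow Cog's standard HTTP prediction API.
resp = requests.post(
    "http://localhost:5000/predictions",
    json={
        "input": {
            "prompt": "A majestic, white horse gallops gracefully across a moonlit beach",
            "num_inference_steps": 4,
            "guidance_scale": 7.5,
            "num_frames": 16,
            "fps": 8,
        }
    },
)
resp.raise_for_status()
print(resp.json()["output"])  # typically a URL or data URI for the generated video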

Readme

T2V-Turbo: Breaking the Quality Bottleneck of Video Consistency Model with Mixed Reward Feedback

Fast and High-Quality Text-to-video Generation 🚀

This demo uses the T2V-Turbo (VC2) model, which generates videos at a resolution of 320x512.

Citation

@misc{li2024t2vturbo,
      title={T2V-Turbo: Breaking the Quality Bottleneck of Video Consistency Model with Mixed Reward Feedback}, 
      author={Jiachen Li and Weixi Feng and Tsu-Jui Fu and Xinyi Wang and Sugato Basu and Wenhu Chen and William Yang Wang},
      year={2024},
      eprint={2405.18750},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}