cuuupid / cogvideox-5b

Generate high-quality videos from a text prompt


CogVideoX is an open-source version of the video generation model originating from QingYing. The table below displays the list of video generation models we currently offer, along with their foundational information.

| Model Name | CogVideoX-2B | CogVideoX-5B |
|---|---|---|
| Model Description | Entry-level model, balancing compatibility. Low cost for running and secondary development. | Larger model with higher video generation quality and better visual effects. |
| Inference Precision | FP16* (recommended), BF16, FP32, FP8*, INT8; INT4 not supported | BF16 (recommended), FP16, FP32, FP8*, INT8; INT4 not supported |
| Single GPU VRAM Consumption | FP16: 18 GB using SAT / 12.5 GB* using diffusers<br>INT8: 7.8 GB* using diffusers | BF16: 26 GB using SAT / 20.7 GB* using diffusers<br>INT8: 11.4 GB* using diffusers |
| Multi-GPU Inference VRAM Consumption | FP16: 10 GB* using diffusers | BF16: 15 GB* using diffusers |
| Inference Speed (Step = 50, FP/BF16) | Single A100: ~90 seconds<br>Single H100: ~45 seconds | Single A100: ~180 seconds<br>Single H100: ~90 seconds |
| Fine-tuning Precision | FP16 | BF16 |
| Fine-tuning VRAM Consumption (per GPU) | 47 GB (bs=1, LoRA)<br>61 GB (bs=2, LoRA)<br>62 GB (bs=1, SFT) | 63 GB (bs=1, LoRA)<br>80 GB (bs=2, LoRA)<br>75 GB (bs=1, SFT) |
| Prompt Language | English* | English* |
| Prompt Length Limit | 226 tokens | 226 tokens |
| Video Length | 6 seconds | 6 seconds |
| Frame Rate | 8 frames per second | 8 frames per second |
| Video Resolution | 720 × 480; other resolutions not supported (including fine-tuning) | 720 × 480; other resolutions not supported (including fine-tuning) |
| Positional Encoding | 3d_sincos_pos_embed | 3d_rope_pos_embed |
| Download Page (Diffusers) | 🤗 HuggingFace<br>🤖 ModelScope<br>🟣 WiseModel | 🤗 HuggingFace<br>🤖 ModelScope<br>🟣 WiseModel |
| Download Page (SAT) | SAT | SAT |

Data Explanation

  • When testing with the diffusers library, the enable_model_cpu_offload() option and the pipe.vae.enable_tiling() optimization were enabled. Actual VRAM/memory usage has not been tested on devices other than the NVIDIA A100/H100; in general, the scheme can be adapted to any device with the NVIDIA Ampere architecture or newer. If these optimizations are disabled, VRAM usage increases significantly, with peak VRAM roughly 3 times the values in the table.
  • When performing multi-GPU inference, the enable_model_cpu_offload() optimization needs to be disabled.
  • Using an INT8 model significantly reduces inference speed. This trade-off allows GPUs with less VRAM to run inference properly, with minimal loss in video quality.
  • The 2B model is trained using FP16 precision, while the 5B model is trained using BF16 precision. It is recommended to use the precision used in model training for inference.
  • FP8 precision must be used on NVIDIA H100 and above devices, requiring source installation of the torch, torchao, diffusers, and accelerate Python packages. CUDA 12.4 is recommended.
  • Inference speed testing also used the VRAM optimization scheme above; without it, inference is about 10% faster. Only the diffusers versions of the models support quantization.
  • The models only support English prompts; prompts in other languages should first be translated into English, for example by a large language model.

Citation

🌟 If you find our work helpful, please leave us a star and cite our paper.

@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}
@article{hong2022cogvideo,
  title={CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers},
  author={Hong, Wenyi and Ding, Ming and Zheng, Wendi and Liu, Xinghan and Tang, Jie},
  journal={arXiv preprint arXiv:2205.15868},
  year={2022}
}