thudm / cogvideox-i2v

Image-to-Video Diffusion Models with An Expert Transformer

  • Public
  • 910 runs
  • L40S
  • GitHub
  • Weights
  • Paper
  • License

Input

Input prompt (string)
Default: "Starry sky slowly rotating."

Input image (file, required)

Number of denoising steps (integer, minimum: 1, maximum: 500)
Default: 50

Scale for classifier-free guidance (number, minimum: 1, maximum: 20)
Default: 6

Number of frames for the output video (integer)
Default: 49

Random seed (integer)
Leave blank to randomize the seed
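The model can be called from the Replicate Python client. The sketch below is a minimal, unofficial example: the parameter names (prompt, image, num_inference_steps, guidance_scale, num_frames, seed) are inferred from the form above and may differ from the model's actual input schema.

import replicate

# Run the latest version of the model. File inputs can be passed as open
# file handles (the client uploads them) or as URL strings.
output = replicate.run(
    "thudm/cogvideox-i2v",
    input={
        "prompt": "Starry sky slowly rotating.",  # default prompt
        "image": open("input.jpg", "rb"),         # required input image
        "num_inference_steps": 50,                # 1-500, default 50
        "guidance_scale": 6,                      # 1-20, default 6
        "num_frames": 49,                         # default 49
        # "seed": 42,                             # omit to randomize
    },
)

# Depending on the client version, the result is a URL string or a
# file-like object pointing at the generated video.
print(output)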


Run time and cost

This model costs approximately $0.48 to run on Replicate (about 2 runs per $1), though the exact cost varies with your inputs. It is also open source, so you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 9 minutes.
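When running the model locally, Cog images expose a simple HTTP API. The sketch below is illustrative only: it assumes the container is already running on port 5000 (Cog's default) and reuses the same assumed parameter names as the earlier example.

import base64
import requests

# Cog's HTTP API accepts file inputs as data URIs.
with open("input.jpg", "rb") as f:
    data_uri = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:5000/predictions",
    json={
        "input": {
            "prompt": "Starry sky slowly rotating.",
            "image": data_uri,
            "num_inference_steps": 50,
            "guidance_scale": 6,
            "num_frames": 49,
        }
    },
    timeout=600,  # predictions typically finish within 9 minutes
)
resp.raise_for_status()
prediction = resp.json()
print(prediction["status"], prediction.get("output"))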

Readme

CogVideoX

This is the image-to-video generation demo; for text-to-video, go to https://replicate.com/chenxwh/cogvideox-t2v

CogVideoX is an open-source version of the video generation model originating from QingYing.

Citation

🌟 If you find our work helpful, please leave us a star and cite our paper.

@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}

We welcome your contributions! See the GitHub repository linked above for more information.

License Agreement

The code in this repository is released under the Apache 2.0 License.

The CogVideoX-5B model (Transformers module, including I2V and T2V) is released under the CogVideoX LICENSE.