chenxwh / ltx-video

DiT-based video generation model for generating high-quality videos in real time

  • Public
  • 379 runs
  • GitHub
  • Weights
  • License

Run time and cost

This model costs approximately $0.040 to run on Replicate, or 25 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.
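For example, a minimal sketch of invoking the model through the Replicate Python client is shown below; the input field names (`prompt`, `negative_prompt`) are assumptions and should be checked against the model's actual input schema on Replicate.

```python
# Minimal sketch using the Replicate Python client (pip install replicate).
# Requires REPLICATE_API_TOKEN to be set in the environment.
# NOTE: the input keys below are assumptions; consult the model's API schema
# for the exact field names and a pinned version hash if one is required.
import replicate

output = replicate.run(
    "chenxwh/ltx-video",  # a "chenxwh/ltx-video:<version>" reference may be needed
    input={
        "prompt": "A woman walks along a beach at sunset, waves rolling in",
        "negative_prompt": "low quality, worst quality, blurry",
    },
)

# The output is typically a URL or file-like handle for the generated video.
print(output)
```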

This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 56 seconds. The predict time for this model varies significantly based on the inputs.
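Because the weights are open source, the model can also be run locally outside of Docker. The following is a hedged sketch using the `LTXPipeline` integration in diffusers; the checkpoint name, resolution, frame count, and sampling settings are assumptions rather than this repository's official inference script.

```python
# Hedged sketch of local inference via the diffusers LTXPipeline integration.
# Checkpoint name and all parameters below are assumptions; adjust to your setup.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")  # a GPU with enough VRAM (e.g. an A40-class card) is assumed

video = pipe(
    prompt="A hot air balloon drifting over a mountain lake at dawn",
    width=768,            # matches the 768x512 resolution described in the readme
    height=512,
    num_frames=121,       # frame counts are constrained by the model (assumed 8k+1 here)
    num_inference_steps=50,
).frames[0]

export_to_video(video, "output.mp4", fps=24)  # 24 FPS is the model's native frame rate
```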

Readme

LTX-Video

Introduction

LTX-Video is the first DiT-based video generation model that can generate high-quality videos in real time. It can generate 24 FPS videos at 768x512 resolution faster than it takes to watch them. The model is trained on a large-scale dataset of diverse videos and can generate high-resolution videos with realistic and varied content.

More to come…

Acknowledgement

We are grateful for the following awesome projects used when implementing LTX-Video:

  • DiT and PixArt-alpha: vision transformers for image generation.