# flux-to-wan
Generate videos from text descriptions using a Flux-finetuned image model and WAN 2.1 video generation.
https://replicate.com/pipelines-beta/flux-finetune-image-to-wan-video
See also:
- flux-lora-video – essentially the same pipeline, but with Claude-powered prompt enhancement, and it uses Minimax's video generation model instead of WAN 2.1
## Features
- Create high-quality videos from text prompts
- Use any Flux-finetuned model as the image source
- Seamlessly converts still images into smooth 720p video
## Models
Under the hood it uses these models:
- wavespeedai/wan-2.1-i2v-720p: A powerful image-to-video model that transforms still images into cinematic videos
- Flux finetuned models (default: zeke/ziki-flux): Custom image generation models trained using Flux
## How it works
This pipeline takes your text prompt and passes it to a Flux-finetuned image model to generate a still image. This image is then fed into WAN 2.1, along with your original prompt, to create a fluid video animation that brings your concept to life. Just remember to include your Flux model’s trigger word in the prompt for best results.
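The flow above can be sketched with the Replicate Python client. The pipeline slug and input field name below are assumptions based on this README, and `ZIKI` is a hypothetical trigger word for the default `zeke/ziki-flux` model — check the pipeline page for the exact input schema before running this.

```python
# Minimal sketch of calling this pipeline from Python.
# Assumptions: pipeline slug, input field name, and the "ZIKI" trigger word.

def build_prompt(trigger_word: str, description: str) -> str:
    """Prepend the Flux finetune's trigger word, as the README advises."""
    return f"{trigger_word} {description}"

prompt = build_prompt("ZIKI", "riding a bicycle through Tokyo at night")

# Requires `pip install replicate` and REPLICATE_API_TOKEN in the environment:
#
#   import replicate
#   output = replicate.run(
#       "pipelines-beta/flux-finetune-image-to-wan-video",
#       input={"prompt": prompt},
#   )
#   print(output)  # URL of the generated video
```

The actual network call is left commented out since it needs an API token; the helper just makes it harder to forget the trigger word.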