lucataco / wan2.1-i2v-lora

Wan2.1 14B 480p LoRA inference via Diffusers (Work in progress)

  • Public
  • 423 runs
  • A100 (80GB)
  • GitHub
  • Weights
  • License

Input

  • image (file, required): Input image
  • Text prompt describing the desired video effect (string, required)
  • Text prompt describing what to avoid in the video (string). Default: "low quality, bad quality, blurry, pixelated, watermark"
  • URL to the LoRA weights (HuggingFace or CivitAI) (string, required)
  • LoRA effect strength (number, minimum 0, maximum 2). Default: 1
  • Video duration in seconds (number, minimum 1, maximum 5). Default: 3
  • Frames per second (integer, minimum 7, maximum 30). Default: 16
  • Guidance scale for generation (number, minimum 1, maximum 20). Default: 5
  • Number of inference steps (integer, minimum 1, maximum 100). Default: 28
  • Image resizing strategy (string). Default: "auto"
  • Random seed for generation (integer)
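
This listing does not preserve the exact input field names, so the snippet below is only a minimal sketch of creating a prediction with the Replicate Python client. Every field name other than image is an assumed placeholder that mirrors the descriptions above; check the model's API schema for the real names.

```python
# Minimal sketch using the Replicate Python client (pip install replicate).
# Field names other than "image" are assumptions based on the parameter
# descriptions above, not the confirmed schema.
import replicate

output = replicate.run(
    "lucataco/wan2.1-i2v-lora",
    input={
        "image": open("input.jpg", "rb"),
        "prompt": "The subject slowly turns toward the camera and smiles",
        "negative_prompt": "low quality, bad quality, blurry, pixelated, watermark",
        "lora_url": "https://huggingface.co/<user>/<lora-repo>",  # placeholder URL
        "lora_strength": 1.0,        # 0 to 2
        "duration": 3,               # seconds, 1 to 5
        "fps": 16,                   # 7 to 30
        "guidance_scale": 5,         # 1 to 20
        "num_inference_steps": 28,   # 1 to 100
        "seed": 42,
    },
)
print(output)  # typically a URL (or file handle) for the generated video
```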

Run time and cost

This model costs approximately $0.47 to run on Replicate, or 2 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 6 minutes. The predict time for this model varies significantly based on the inputs.

Readme

Work In Progress

A Cog model that runs Wan2.1 image-to-video inference with LoRAs, such as those in Remade-AI's collection.
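
For reference, the sketch below shows the general shape of a Diffusers image-to-video workflow with a Wan2.1 LoRA, which is what this wrapper is based on. The base checkpoint name, the LoRA-loading call, the 480p resizing, and the frame-count arithmetic are illustrative assumptions and may differ from what the predictor actually does; it requires a recent diffusers release with Wan2.1 support.

```python
# Illustrative sketch of Wan2.1 480p image-to-video with a LoRA in Diffusers.
# Checkpoint, LoRA path, resolution handling, and frame math are assumptions.
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
)
# LoRA weights previously downloaded from the URL given as input
pipe.load_lora_weights("path/to/downloaded_lora.safetensors")
pipe.to("cuda")

image = load_image("input.jpg").resize((832, 480))  # 480p target resolution
frames = pipe(
    image=image,
    prompt="Text prompt describing the desired video effect",
    negative_prompt="low quality, bad quality, blurry, pixelated, watermark",
    height=480,
    width=832,
    num_frames=3 * 16 + 1,       # duration (s) * fps + 1
    guidance_scale=5.0,
    num_inference_steps=28,
).frames[0]
export_to_video(frames, "output.mp4", fps=16)
```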