wavespeedai/wan-2.1-i2v-480p:ae5bc519
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
prompt | string | | Prompt for video generation |
image | string | | Input image to start generating from |
num_frames | integer | 81 (min: 81, max: 100) | Number of video frames. 81 frames give the best results |
max_area | string (enum) | 832x480 (options: 832x480, 480x832) | Maximum area of the generated image. The input image will shrink to fit these dimensions |
frames_per_second | integer | 16 (min: 5, max: 24) | Frames per second. Note that the pricing of this model is based on the video duration at 16 fps |
fast_mode | string (enum) | Balanced (options: Off, Balanced, Fast) | Speed up generation with different levels of acceleration. Faster modes may degrade quality somewhat. The speedup depends on the content, so different videos may see different speedups |
sample_steps | integer | 30 (min: 1, max: 40) | Number of generation steps. Fewer steps means faster generation at the expense of output quality. 30 steps is sufficient for most prompts |
sample_guide_scale | number | 5 (max: 10) | A higher guide scale improves prompt adherence but can reduce variation |
sample_shift | number | 3 (min: 1, max: 10) | Sample shift factor |
seed | integer | | Random seed. Leave blank for a random seed |
lora_weights | string | | Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars' |
lora_scale | number | 1 | Determines how strongly the main LoRA should be applied. Sane results lie between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA. |
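For reference, a request supplying these fields through the Replicate Python client might look like the following sketch. The prompt and image URL are placeholders, and the model reference uses the truncated version ID shown at the top of this page; in practice you would substitute the full version ID from the model overview.

```python
# Minimal sketch using the Replicate Python client (`pip install replicate`).
# The image URL and prompt are placeholders; the version ID below is the
# truncated one shown on this page, not a full version hash.
import replicate

output = replicate.run(
    "wavespeedai/wan-2.1-i2v-480p:ae5bc519",
    input={
        "image": "https://example.com/input.jpg",  # placeholder start image
        "prompt": "a sailboat drifting across a calm lake at sunset",
        "num_frames": 81,            # default; 81 frames give the best results
        "frames_per_second": 16,     # pricing is based on duration at 16 fps
        "max_area": "832x480",
        "fast_mode": "Balanced",
        "sample_steps": 30,
        "sample_guide_scale": 5,
        "sample_shift": 3,
        # "seed": 42,                              # omit for a random seed
        # "lora_weights": "fofr/flux-pixar-cars",  # optional LoRA, in any of the formats above
        # "lora_scale": 1,
    },
)
print(output)  # a URI pointing at the generated video
```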
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{"format": "uri", "title": "Output", "type": "string"}
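The response is a single URI pointing at the generated video. A minimal sketch for saving it locally, assuming the output arrives as the plain string described by this schema (newer client releases may instead wrap it in a file-like object with a `.url` attribute):

```python
# Download the generated video, assuming `output` is the URI string from the
# call above.
import urllib.request

urllib.request.urlretrieve(output, "output.mp4")
print("saved video to output.mp4")
```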