
wavespeedai /wan-2.1-t2v-720p:f5576aa3

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

| Field | Type | Default value | Description |
| --- | --- | --- | --- |
| prompt | string | — | Text prompt for video generation |
| negative_prompt | string | — | Negative prompt to avoid certain elements |
| aspect_ratio | string | 16:9 | Aspect ratio of the output video |
| fast_mode | string | Balanced | Speed up generation with different levels of acceleration. Faster modes may degrade quality somewhat; the speedup depends on the content, so different videos may see different speedups |
| seed | integer | — | Random seed; set for reproducible generation |
| sample_guide_scale | number | 5 | Guidance scale for generation (min: 1, max: 10) |
| sample_steps | integer | 30 | Number of inference steps (min: 1, max: 40) |
| sample_shift | integer | 3 | Flow shift parameter for video generation (max: 10) |
| lora_weights | string | — | Load LoRA weights. Supports Hugging Face URLs in the format huggingface.co/&lt;owner&gt;/&lt;model-name&gt;, CivitAI URLs in the format civitai.com/models/&lt;id&gt;[/&lt;model-name&gt;], or arbitrary .safetensors URLs from the Internet |
| lora_scale | number | 1 | Determines how strongly the main LoRA is applied (max: 4). Sensible results fall between 0 and 1 for base inference; you may still need to experiment to find the best value for your particular LoRA |
| disable_safety_checker | boolean | False | Disable the safety checker for generated videos |
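Several of the numeric fields above carry min/max constraints. A minimal sketch of checking a payload against those bounds client-side before submitting it; `validate_input` and `BOUNDS` are hypothetical helpers for illustration, not part of the model's API.

```python
# Numeric constraints taken from the input schema above,
# as (min, max) pairs; None means the bound is not specified.
BOUNDS = {
    "sample_guide_scale": (1, 10),
    "sample_steps": (1, 40),
    "sample_shift": (None, 10),
    "lora_scale": (None, 4),
}

def validate_input(payload: dict) -> list[str]:
    """Return a list of constraint violations (empty if the payload is valid)."""
    errors = []
    for field, (lo, hi) in BOUNDS.items():
        if field not in payload:
            continue  # missing fields fall back to their schema defaults
        value = payload[field]
        if lo is not None and value < lo:
            errors.append(f"{field}={value} is below the minimum of {lo}")
        if hi is not None and value > hi:
            errors.append(f"{field}={value} exceeds the maximum of {hi}")
    return errors

payload = {
    "prompt": "a red fox running through snow",
    "sample_steps": 30,
    "sample_guide_scale": 5,
}
problems = validate_input(payload)  # empty list: payload is within bounds
```

Checking locally avoids a round trip to the API just to discover that, say, `sample_steps` was set above 40.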

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema

{
  "format": "uri",
  "title": "Output",
  "type": "string"
}
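The output is a single URI string pointing at the generated video. A sketch of invoking the model with the official `replicate` Python client and saving the result to disk; the input values are illustrative, running it requires a `REPLICATE_API_TOKEN` in the environment, and `filename_from_uri`/`generate_video` are hypothetical helpers, not part of the API. Newer client versions may return a file-like object rather than a plain string, hence the `str()` coercion.

```python
import os
import urllib.request
from urllib.parse import urlparse

def filename_from_uri(uri: str) -> str:
    """Derive a local filename from the output URI; falls back to output.mp4."""
    name = os.path.basename(urlparse(uri).path)
    return name or "output.mp4"

def generate_video(prompt: str) -> str:
    """Run the model and return the local path of the downloaded video.

    Requires the `replicate` package and REPLICATE_API_TOKEN to be set.
    The input values below are illustrative, not recommendations.
    """
    import replicate  # imported lazily so filename_from_uri stays dependency-free

    uri = replicate.run(
        "wavespeedai/wan-2.1-t2v-720p:f5576aa3",
        input={
            "prompt": prompt,
            "aspect_ratio": "16:9",
            "sample_steps": 30,
            "sample_guide_scale": 5,
        },
    )
    path = filename_from_uri(str(uri))
    urllib.request.urlretrieve(str(uri), path)
    return path
```

Because the output schema is just a URI, anything that can fetch a URL (curl, a browser, `urlretrieve` as above) can consume the response.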