phxdev1/multi-lora-wan

Run phxdev1/multi-lora-wan with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
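
For example, a minimal sketch of calling this model from Python with the replicate client library is shown below. The input values are illustrative placeholders, you may need to pin a specific version (phxdev1/multi-lora-wan:<version>) depending on how the model is published, and recent client versions may return file-like objects rather than plain URL strings.

```python
# Minimal sketch of calling this model with the Replicate Python client.
# Requires `pip install replicate` and REPLICATE_API_TOKEN in the environment.
# The prompt and settings below are illustrative placeholders, not recommendations.
import replicate

output = replicate.run(
    "phxdev1/multi-lora-wan",  # append ":<version>" here if you need to pin a version
    input={
        "prompt": "a red fox running through snow, cinematic lighting",
        "aspect_ratio": "16:9",
        "frames": 81,
        "model": "14b",
        "resolution": "480p",
        "sample_steps": 30,
    },
)

# The output schema below describes an array of URIs pointing at the generated video(s).
for item in output:
    print(item)
```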

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

| Field | Type | Default value | Description |
| --- | --- | --- | --- |
| prompt | string | | Text prompt for video generation |
| negative_prompt | string | | Things you do not want to see in your video |
| image | string | | Image to use as a starting frame for image-to-video generation |
| aspect_ratio | | 16:9 | The aspect ratio of the video: 16:9, 9:16, 1:1, etc. |
| frames | | 81 | The number of frames to generate (1 to 5 seconds) |
| model | | 14b | The model to use. 1.3b is faster, but 14b is better quality. A LoRA works with either 1.3b or 14b, depending on the version it was trained on. |
| resolution | | 480p | The resolution of the video. 720p is not supported for 1.3b. |
| lora_url | string | | Optional: the URL of a LoRA to use (for single-LoRA compatibility) |
| lora_strength_model | number | 1 | Strength of the LoRA applied to the model. 0.0 is no LoRA (for single-LoRA compatibility). |
| lora_strength_clip | number | 1 | Strength of the LoRA applied to the CLIP model. 0.0 is no LoRA (for single-LoRA compatibility). |
| loras | string | | JSON string of LoRAs to apply. Format: "[{'url': 'lora1.safetensors', 'strength_model': 1.0, 'strength_clip': 1.0, 'enabled': true}]". See the example below this table. |
| enable_lora_memory_management | boolean | False | Use progressive LoRA chaining (reduces VRAM) vs. all-at-once chaining (faster but more VRAM) |
| fast_mode | | Balanced | Speed up generation with different levels of acceleration. V2.1 mode uses LCM sampling for maximum speed. |
| sample_steps | integer | 30 | Number of generation steps (min: 1, max: 60). Fewer steps means faster generation, at the expense of output quality. 30 steps is sufficient for most prompts. |
| sample_guide_scale | number | 5 | Higher guide scale makes prompt adherence better, but can reduce variation (max: 10) |
| sample_shift | number | 8 | Sample shift factor (max: 10) |
| seed | integer | | Set a seed for reproducibility. Random by default. |
| interpolation_multiplier | integer | 1 | Frame interpolation multiplier for smoother video (V2.1 feature); min: 1, max: 4. 1 = no interpolation, 2 = double frames, etc. |
| output_fps | number | 16 | Target output FPS for interpolated video (V2.1 feature); min: 8, max: 60 |
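
As noted in the loras row above, multiple LoRAs are passed as a single JSON string. A minimal sketch of building that value in Python follows; the URLs and strength values are placeholders, and json.dumps emits standard double-quoted JSON rather than the single-quoted style shown in the format example.

```python
# Sketch: building the `loras` JSON string for stacking multiple LoRAs.
# The URLs below are placeholders; each entry uses the keys shown in the schema:
# url, strength_model, strength_clip, enabled.
import json

loras = [
    {
        "url": "https://example.com/style-lora.safetensors",   # placeholder
        "strength_model": 1.0,
        "strength_clip": 1.0,
        "enabled": True,
    },
    {
        "url": "https://example.com/motion-lora.safetensors",  # placeholder
        "strength_model": 0.7,
        "strength_clip": 0.7,
        "enabled": True,
    },
]

input_payload = {
    "prompt": "a red fox running through snow",
    "loras": json.dumps(loras),  # the field is passed as a JSON string, not a list
}
```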

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "array",
  "items": {
    "type": "string",
    "format": "uri"
  },
  "title": "Output"
}
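
Since the response is an array of file URIs, a typical follow-up is downloading each returned video. A minimal sketch, assuming the URIs are plain HTTPS URLs, that the requests package is installed, and that the files are MP4 (the extension is an assumption):

```python
# Sketch: saving the returned videos to disk.
# `output` is the array of URI strings described by the schema above.
import requests

for i, uri in enumerate(output):
    resp = requests.get(uri, timeout=120)
    resp.raise_for_status()
    with open(f"output_{i}.mp4", "wb") as f:  # .mp4 extension is an assumption
        f.write(resp.content)
```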