Readme
Work in progress
Wan2.1 14B 480p LoRA inference via Diffusers: a Cog model that runs Wan2.1 image-to-video inference with LoRAs, such as the Squish effect LoRA used in the examples below.
Run this model in Node.js. First, install Replicate's Node.js client library:
npm install replicate
Then set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run lucataco/wan2.1-i2v-lora using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
"lucataco/wan2.1-i2v-lora:f130c663a974802a8e5826a49b82064a27d5ffeec8f008ae3195a70e49527a97",
{
input: {
fps: 16,
image: "https://replicate.delivery/pbxt/Mf7Um7V1nQjebWjOi3RndihR5sK95269LxZDY8s17mqW5jda/dog-1024.png",
prompt: "In the video, a miniature dog is presented. The dog is held in a person's hands. The person then presses on the dog, causing a sq41sh squish effect. The person keeps pressing down on the dog, further showing the sq41sh squish effect.",
duration: 3,
lora_url: "https://huggingface.co/Remade-AI/Squish/resolve/main/squish_18.safetensors",
resize_mode: "auto",
lora_strength: 1,
guidance_scale: 5,
negative_prompt: "low quality, bad quality, blurry, pixelated, watermark",
num_inference_steps: 40
}
}
);
// To access the file URL:
console.log(output.url()); //=> "http://example.com"
// To write the file to disk (with: import fs from "node:fs/promises"):
await fs.writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:
pip install replicate
Then set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run lucataco/wan2.1-i2v-lora using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
"lucataco/wan2.1-i2v-lora:f130c663a974802a8e5826a49b82064a27d5ffeec8f008ae3195a70e49527a97",
input={
"fps": 16,
"image": "https://replicate.delivery/pbxt/Mf7Um7V1nQjebWjOi3RndihR5sK95269LxZDY8s17mqW5jda/dog-1024.png",
"prompt": "In the video, a miniature dog is presented. The dog is held in a person's hands. The person then presses on the dog, causing a sq41sh squish effect. The person keeps pressing down on the dog, further showing the sq41sh squish effect.",
"duration": 3,
"lora_url": "https://huggingface.co/Remade-AI/Squish/resolve/main/squish_18.safetensors",
"resize_mode": "auto",
"lora_strength": 1,
"guidance_scale": 5,
"negative_prompt": "low quality, bad quality, blurry, pixelated, watermark",
"num_inference_steps": 40
}
)
print(output)
To learn more, take a look at the guide on getting started with Python.
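All of the examples pass the same input payload. If you call the model from several places, it can help to assemble that payload with a small helper. The function below is a hypothetical convenience wrapper (the name and signature are ours, not part of the replicate client); its defaults mirror the example inputs above:

```python
def build_input(image, prompt, lora_url, *, fps=16, duration=3,
                resize_mode="auto", lora_strength=1.0, guidance_scale=5,
                num_inference_steps=40,
                negative_prompt="low quality, bad quality, blurry, pixelated, watermark"):
    """Assemble an input dict for replicate.run; defaults mirror the example above."""
    return {
        "fps": fps,
        "image": image,
        "prompt": prompt,
        "duration": duration,
        "lora_url": lora_url,
        "resize_mode": resize_mode,
        "lora_strength": lora_strength,
        "guidance_scale": guidance_scale,
        "negative_prompt": negative_prompt,
        "num_inference_steps": num_inference_steps,
    }

# Build the payload used in the examples, overriding nothing:
payload = build_input(
    "https://replicate.delivery/pbxt/Mf7Um7V1nQjebWjOi3RndihR5sK95269LxZDY8s17mqW5jda/dog-1024.png",
    "A miniature dog is squished in a person's hands, sq41sh squish effect.",
    "https://huggingface.co/Remade-AI/Squish/resolve/main/squish_18.safetensors",
)
```

The resulting dict can be passed directly as `input=payload` to `replicate.run`.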
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run lucataco/wan2.1-i2v-lora using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "lucataco/wan2.1-i2v-lora:f130c663a974802a8e5826a49b82064a27d5ffeec8f008ae3195a70e49527a97",
"input": {
"fps": 16,
"image": "https://replicate.delivery/pbxt/Mf7Um7V1nQjebWjOi3RndihR5sK95269LxZDY8s17mqW5jda/dog-1024.png",
"prompt": "In the video, a miniature dog is presented. The dog is held in a person\'s hands. The person then presses on the dog, causing a sq41sh squish effect. The person keeps pressing down on the dog, further showing the sq41sh squish effect.",
"duration": 3,
"lora_url": "https://huggingface.co/Remade-AI/Squish/resolve/main/squish_18.safetensors",
"resize_mode": "auto",
"lora_strength": 1,
"guidance_scale": 5,
"negative_prompt": "low quality, bad quality, blurry, pixelated, watermark",
"num_inference_steps": 40
}
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
{
"completed_at": "2025-03-15T07:09:14.636690Z",
"created_at": "2025-03-15T07:05:10.835000Z",
"data_removed": false,
"error": null,
"id": "p0xwwxqvpdrj00cnk2fawv6kbr",
"input": {
"fps": 16,
"image": "https://replicate.delivery/pbxt/Mf7Um7V1nQjebWjOi3RndihR5sK95269LxZDY8s17mqW5jda/dog-1024.png",
"prompt": "In the video, a miniature dog is presented. The dog is held in a person's hands. The person then presses on the dog, causing a sq41sh squish effect. The person keeps pressing down on the dog, further showing the sq41sh squish effect.",
"duration": 3,
"lora_url": "https://huggingface.co/Remade-AI/Squish/resolve/main/squish_18.safetensors",
"resize_mode": "auto",
"lora_strength": 1,
"guidance_scale": 5,
"negative_prompt": "low quality, bad quality, blurry, pixelated, watermark",
"num_inference_steps": 40
},
"logs": "Starting prediction with: prompt='In the video, a miniature dog is presented. The dog is held in a person's hands. The person then presses on the dog, causing a sq41sh squish effect. The person keeps pressing down on the dog, further showing the sq41sh squish effect.', lora_strength=1.0, duration=3.0s\nCalculated 49 frames for 3.0s at 16 fps\nLoading input image from: /tmp/tmpl6fn4nyqdog-1024.png\nImage loaded successfully: 1001x991\nAuto-selected fixed square dimensions: 512x512 for aspect ratio 0.99\nFinal dimensions: 512x512\nDownloading LoRA from: https://huggingface.co/Remade-AI/Squish/resolve/main/squish_18.safetensors\nDownloading from HuggingFace: repo_id=Remade-AI/Squish, filename=squish_18.safetensors\nLoading LoRA weights from: /root/.cache/huggingface/hub/models--Remade-AI--Squish/snapshots/70e32c7d833743c88352e2ac973968417fb5b051/squish_18.safetensors with strength: 1.0\nLoRA weights loaded successfully\nStarting video generation: 49 frames, 40 steps, guidance=5.0\n 0%| | 0/40 [00:00<?, ?it/s]\n 2%|▎ | 1/40 [00:05<03:48, 5.87s/it]\n 5%|▌ | 2/40 [00:11<03:31, 5.55s/it]\n 8%|▊ | 3/40 [00:17<03:31, 5.71s/it]\n 10%|█ | 4/40 [00:22<03:28, 5.78s/it]\n 12%|█▎ | 5/40 [00:28<03:23, 5.82s/it]\n 15%|█▌ | 6/40 [00:34<03:19, 5.85s/it]\n 18%|█▊ | 7/40 [00:40<03:13, 5.87s/it]\n 20%|██ | 8/40 [00:46<03:08, 5.89s/it]\n 22%|██▎ | 9/40 [00:52<03:02, 5.90s/it]\n 25%|██▌ | 10/40 [00:58<02:57, 5.91s/it]\n 28%|██▊ | 11/40 [01:04<02:51, 5.91s/it]\n 30%|███ | 12/40 [01:10<02:45, 5.92s/it]\n 32%|███▎ | 13/40 [01:16<02:39, 5.92s/it]\n 35%|███▌ | 14/40 [01:22<02:33, 5.92s/it]\n 38%|███▊ | 15/40 [01:28<02:28, 5.92s/it]\n 40%|████ | 16/40 [01:34<02:22, 5.92s/it]\n 42%|████▎ | 17/40 [01:39<02:16, 5.92s/it]\n 45%|████▌ | 18/40 [01:45<02:10, 5.92s/it]\n 48%|████▊ | 19/40 [01:51<02:04, 5.92s/it]\n 50%|█████ | 20/40 [01:57<01:58, 5.92s/it]\n 52%|█████▎ | 21/40 [02:03<01:52, 5.92s/it]\n 55%|█████▌ | 22/40 [02:09<01:46, 5.92s/it]\n 57%|█████▊ | 23/40 [02:15<01:40, 5.92s/it]\n 
60%|██████ | 24/40 [02:21<01:34, 5.92s/it]\n 62%|██████▎ | 25/40 [02:27<01:28, 5.92s/it]\n 65%|██████▌ | 26/40 [02:33<01:22, 5.92s/it]\n 68%|██████▊ | 27/40 [02:39<01:17, 5.92s/it]\n 70%|███████ | 28/40 [02:45<01:11, 5.93s/it]\n 72%|███████▎ | 29/40 [02:51<01:05, 5.92s/it]\n 75%|███████▌ | 30/40 [02:56<00:59, 5.93s/it]\n 78%|███████▊ | 31/40 [03:02<00:53, 5.93s/it]\n 80%|████████ | 32/40 [03:08<00:47, 5.92s/it]\n 82%|████████▎ | 33/40 [03:14<00:41, 5.93s/it]\n 85%|████████▌ | 34/40 [03:20<00:35, 5.92s/it]\n 88%|████████▊ | 35/40 [03:26<00:29, 5.92s/it]\n 90%|█████████ | 36/40 [03:32<00:23, 5.92s/it]\n 92%|█████████▎| 37/40 [03:38<00:17, 5.92s/it]\n 95%|█████████▌| 38/40 [03:44<00:11, 5.92s/it]\n 98%|█████████▊| 39/40 [03:50<00:05, 5.92s/it]\n100%|██████████| 40/40 [03:56<00:00, 5.92s/it]\n100%|██████████| 40/40 [03:56<00:00, 5.90s/it]\nVideo generation completed in 240.27 seconds\nExporting video to: /tmp/tmpbh4k1cz0/output.mp4 at 16 fps\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\nTo disable this warning, you can either:\n- Avoid using `tokenizers` before the fork if possible\n- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\nVideo saved successfully to: /tmp/tmpbh4k1cz0/output.mp4\nCleaned up temporary LoRA file: /root/.cache/huggingface/hub/models--Remade-AI--Squish/snapshots/70e32c7d833743c88352e2ac973968417fb5b051/squish_18.safetensors",
"metrics": {
"predict_time": 243.794411078,
"total_time": 243.80169
},
"output": "https://replicate.delivery/yhqm/X0yqkQLeLfuNUESVZHL53DIzmRTtkNJiXFOWsZCPDyKamtYUA/output.mp4",
"started_at": "2025-03-15T07:05:10.842279Z",
"status": "succeeded",
"urls": {
"stream": "https://stream.replicate.com/v1/files/yswh-stbbisuimfyka4g75i6zoqbo5sk47km22tmjds2v32vdkwqfiowa",
"get": "https://api.replicate.com/v1/predictions/p0xwwxqvpdrj00cnk2fawv6kbr",
"cancel": "https://api.replicate.com/v1/predictions/p0xwwxqvpdrj00cnk2fawv6kbr/cancel"
},
"version": "f130c663a974802a8e5826a49b82064a27d5ffeec8f008ae3195a70e49527a97"
}
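When you call the HTTP API directly (or fetch a prediction later via its `urls.get` endpoint), you get back JSON like the example above. A minimal sketch of pulling the useful fields out of that response in Python, using a trimmed copy of the example data:

```python
import json

# A trimmed version of the prediction response shown above.
response_text = """
{
  "id": "p0xwwxqvpdrj00cnk2fawv6kbr",
  "status": "succeeded",
  "output": "https://replicate.delivery/yhqm/X0yqkQLeLfuNUESVZHL53DIzmRTtkNJiXFOWsZCPDyKamtYUA/output.mp4",
  "metrics": {"predict_time": 243.794411078, "total_time": 243.80169},
  "urls": {
    "get": "https://api.replicate.com/v1/predictions/p0xwwxqvpdrj00cnk2fawv6kbr",
    "cancel": "https://api.replicate.com/v1/predictions/p0xwwxqvpdrj00cnk2fawv6kbr/cancel"
  }
}
"""

prediction = json.loads(response_text)

if prediction["status"] == "succeeded":
    video_url = prediction["output"]  # direct URL to the generated .mp4
    predict_time = prediction["metrics"]["predict_time"]
    print(f"Video ready at {video_url} ({predict_time:.0f}s of GPU time)")
```

With `Prefer: wait`, the response may already be in a terminal state; without it, poll `urls.get` until `status` is `succeeded`, `failed`, or `canceled`.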
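The logs in the example prediction show how the inputs map to work done: 3.0 s at 16 fps becomes 49 frames, which is consistent with a `duration * fps + 1` rule (one extra frame for the starting image). That formula is our inference from the log line, not documented behavior:

```python
def calculate_num_frames(duration: float, fps: int) -> int:
    # Frame count implied by the log line
    # "Calculated 49 frames for 3.0s at 16 fps":
    # duration * fps, plus one for the initial (conditioning) frame.
    return int(duration * fps) + 1

print(calculate_num_frames(3.0, 16))  # 49

# The per-step timing also explains the overall runtime: 40 denoising
# steps at ~5.92 s/step is ~237 s, close to the reported 240.27 s.
print(f"~{40 * 5.92:.0f}s")  # ~237s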
This model costs approximately $0.47 to run on Replicate, or 2 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.
This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 6 minutes. The predict time for this model varies significantly based on the inputs.
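The quoted price is consistent with the measured predict time. A quick sanity check against the metrics from the example prediction (the per-second rate below is implied by these two numbers, not an official figure):

```python
import math

cost_per_run = 0.47           # quoted approximate price per run
predict_time = 243.794411078  # seconds, from the example prediction's metrics

implied_rate = cost_per_run / predict_time   # implied $/second of A100 time
runs_per_dollar = math.floor(1 / cost_per_run)

print(f"~${implied_rate:.4f}/s, {runs_per_dollar} runs per $1")
```

Since cost scales with predict time, longer durations, higher resolutions, and more inference steps all raise the per-run price.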