Readme
Make your videos van Gogh-esque.
Run this model in Node.js. First, install Replicate's Node.js client library:
npm install replicate
Next, set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Then import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/van-gogh-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
  {
    input: {
      frames: 81,
      prompt: "GOGH style, painting of a man walking in a field with trees and sunlight",
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Run this model in Python. First, install Replicate's Python client library:
pip install replicate
Next, set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Then import the client:
import replicate
Run shridharathi/van-gogh-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
    input={
        "frames": 81,
        "prompt": "GOGH style, painting of a man walking in a field with trees and sunlight",
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
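The run call returns the generated video as a downloadable URL (see the example response further down). Here is a minimal sketch of saving that video to disk; it assumes the output is a plain list of URL strings, and the local filename van_gogh_clip.mp4 is just an illustrative choice:

# Rough sketch: run the model and save the resulting video locally.
# Assumes output is a list of URL strings, as in the example response
# further down; newer client versions may wrap outputs in file objects,
# in which case adapt the last step accordingly.
import urllib.request

import replicate

output = replicate.run(
    "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
    input={
        "prompt": "GOGH style, painting of a man walking in a field with trees and sunlight",
        "frames": 81,
    },
)

# Download the first (and only) video in the output list.
urllib.request.urlretrieve(output[0], "van_gogh_clip.mp4")
print("saved van_gogh_clip.mp4")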
To call the model over Replicate's HTTP API directly, set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run shridharathi/van-gogh-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
    "input": {
      "frames": 81,
      "prompt": "GOGH style, painting of a man walking in a field with trees and sunlight",
      "fast_mode": "Balanced",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
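The Prefer: wait header holds the request open only for a limited time, so a multi-minute video render will usually come back while the prediction is still processing. In that case, fetch the prediction's get URL until it reaches a terminal status. Below is a rough sketch of that loop against the HTTP API, written in Python with the requests package; the five-second polling interval is an arbitrary choice, and the field names (status, urls.get, output) follow the example response below:

import os
import time

import requests

headers = {
    "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}

# Create the prediction (same payload as the curl call above).
prediction = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers=headers,
    json={
        "version": "bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
        "input": {
            "frames": 81,
            "prompt": "GOGH style, painting of a man walking in a field with trees and sunlight",
        },
    },
).json()

# Poll the prediction's "get" URL until it reaches a terminal status.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(5)
    prediction = requests.get(prediction["urls"]["get"], headers=headers).json()

print(prediction["status"], prediction.get("output"))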
Here's an example prediction created with the inputs above:
{
"completed_at": "2025-03-28T01:06:59.341050Z",
"created_at": "2025-03-28T01:02:47.100000Z",
"data_removed": false,
"error": null,
"id": "g648agez7hrm80cnv92s2c2xzg",
"input": {
"frames": 81,
"prompt": "GOGH style, painting of a man walking in a field with trees and sunlight",
"fast_mode": "Balanced",
"resolution": "480p",
"aspect_ratio": "16:9",
"sample_shift": 8,
"sample_steps": 30,
"negative_prompt": "",
"lora_strength_clip": 1,
"sample_guide_scale": 5,
"lora_strength_model": 1
},
"logs": "Random seed set to: 578857622\n2025-03-28T01:03:35Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmpxptoopvc/weights url=https://replicate.delivery/xezq/KgMA5f1XAqVuW672nNaxVDLb7KfaETMfFCnMHzCJ1R3hqy5oA/trained_model.tar\n2025-03-28T01:03:38Z | INFO | [ Complete ] dest=/tmp/tmpxptoopvc/weights size=\"307 MB\" total_elapsed=3.163s url=https://replicate.delivery/xezq/KgMA5f1XAqVuW672nNaxVDLb7KfaETMfFCnMHzCJ1R3hqy5oA/trained_model.tar\nChecking inputs\n====================================\nChecking weights\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ 14b_64a1c2e3ddb7864e8e05b8d6455d2865.safetensors exists in loras directory\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n⏳ Downloading wan2.1_t2v_14B_bf16.safetensors to ComfyUI/models/diffusion_models\n✅ wan2.1_t2v_14B_bf16.safetensors downloaded to ComfyUI/models/diffusion_models in 14.40s, size: 27253.24MB\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 39, title: Load VAE, class type: VAELoader\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16\nExecuting node 40, title: EmptyHunyuanLatentVideo, class type: EmptyHunyuanLatentVideo\nExecuting node 38, title: Load CLIP, class type: CLIPLoader\n[ComfyUI] CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16\n[ComfyUI] Requested to load WanTEModel\nExecuting node 7, title: CLIP Text Encode (Negative Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 141327.4875 10835.4765625 True\nExecuting node 37, title: Load Diffusion Model, class type: UNETLoader\n[ComfyUI] model weight dtype torch.float16, manual cast: None\n[ComfyUI] model_type FLOW\nExecuting node 54, title: WanVideo Tea Cache (native), class type: WanVideoTeaCacheKJ\nExecuting node 49, title: Load LoRA, class type: LoraLoader\n[ComfyUI] Requested to load WanTEModel\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 141069.4875 10835.4765625 True\nExecuting node 48, title: ModelSamplingSD3, class type: ModelSamplingSD3\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI] Requested to load WAN21\n[ComfyUI] loaded completely 124343.96281542968 27251.406372070312 True\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:21, 6.94s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:16<03:53, 8.35s/it]\n[ComfyUI] 10%|█ | 3/30 [00:25<03:57, 8.81s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:38<04:26, 10.25s/it]\n[ComfyUI] 20%|██ | 6/30 [00:47<02:57, 7.41s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:57<02:18, 6.32s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:07<01:55, 5.77s/it]\n[ComfyUI] 40%|████ | 12/30 [01:17<01:38, 5.45s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:26<01:24, 5.26s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:36<01:11, 5.13s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:46<01:00, 5.05s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:56<00:49, 4.99s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:05<00:39, 4.95s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:15<00:29, 4.98s/it]\n[ComfyUI] 83%|████████▎ | 25/30 [02:15<00:20, 4.11s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:25<00:21, 5.26s/it]\n[ComfyUI] 90%|█████████ | 27/30 [02:25<00:12, 4.10s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:35<00:10, 5.46s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:45<00:00, 5.22s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:45<00:00, 
5.51s/it]\n[ComfyUI] Requested to load WanVAE\nExecuting node 8, title: VAE Decode, class type: VAEDecode\n[ComfyUI] loaded completely 98169.38668441772 242.02829551696777 True\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Prompt executed in 186.12 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4",
"metrics": {
"predict_time": 203.906330106,
"total_time": 252.24105
},
"output": [
"https://replicate.delivery/xezq/rwGhWqKagypSJBEJAY3NFDt4W33ffhMERfFS2DskbW3mB15oA/R8_Wan_00001.mp4"
],
"started_at": "2025-03-28T01:03:35.434720Z",
"status": "succeeded",
"urls": {
"stream": "https://stream.replicate.com/v1/files/bcwr-hj4qtfclofcaqgpdjjseneqtjmvolc6odzwzpttzqrev3mvki7fq",
"get": "https://api.replicate.com/v1/predictions/g648agez7hrm80cnv92s2c2xzg",
"cancel": "https://api.replicate.com/v1/predictions/g648agez7hrm80cnv92s2c2xzg/cancel"
},
"version": "bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a"
}
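Two things worth reading off this response: metrics.predict_time (about 204 s) covers the generation itself, while metrics.total_time (about 252 s) also includes the queueing and setup before started_at, so roughly 48 s here went to cold-start overhead. And since the logs report a 16 fps output, the 81 requested frames correspond to a clip of about 81 / 16 ≈ 5 seconds.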
Logs
The full logs from this run trace the workflow: the ~307 MB trained LoRA weights are downloaded, the Wan 2.1 14B text-to-video diffusion model is fetched alongside the UMT5-XXL text encoder and Wan VAE, the LoRA is applied, the KSampler runs 30 steps with TeaCache enabled, and the decoded frames are combined into an H.264 MP4 at 16 fps.
Random seed set to: 578857622
2025-03-28T01:03:35Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmpxptoopvc/weights url=https://replicate.delivery/xezq/KgMA5f1XAqVuW672nNaxVDLb7KfaETMfFCnMHzCJ1R3hqy5oA/trained_model.tar
2025-03-28T01:03:38Z | INFO | [ Complete ] dest=/tmp/tmpxptoopvc/weights size="307 MB" total_elapsed=3.163s url=https://replicate.delivery/xezq/KgMA5f1XAqVuW672nNaxVDLb7KfaETMfFCnMHzCJ1R3hqy5oA/trained_model.tar
Checking inputs
====================================
Checking weights
✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders
✅ 14b_64a1c2e3ddb7864e8e05b8d6455d2865.safetensors exists in loras directory
✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae
⏳ Downloading wan2.1_t2v_14B_bf16.safetensors to ComfyUI/models/diffusion_models
✅ wan2.1_t2v_14B_bf16.safetensors downloaded to ComfyUI/models/diffusion_models in 14.40s, size: 27253.24MB
====================================
Running workflow
[ComfyUI] got prompt
Executing node 39, title: Load VAE, class type: VAELoader
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Executing node 40, title: EmptyHunyuanLatentVideo, class type: EmptyHunyuanLatentVideo
Executing node 38, title: Load CLIP, class type: CLIPLoader
[ComfyUI] CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
[ComfyUI] Requested to load WanTEModel
Executing node 7, title: CLIP Text Encode (Negative Prompt), class type: CLIPTextEncode
[ComfyUI] loaded completely 141327.4875 10835.4765625 True
Executing node 37, title: Load Diffusion Model, class type: UNETLoader
[ComfyUI] model weight dtype torch.float16, manual cast: None
[ComfyUI] model_type FLOW
Executing node 54, title: WanVideo Tea Cache (native), class type: WanVideoTeaCacheKJ
Executing node 49, title: Load LoRA, class type: LoraLoader
[ComfyUI] Requested to load WanTEModel
Executing node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode
[ComfyUI] loaded completely 141069.4875 10835.4765625 True
Executing node 48, title: ModelSamplingSD3, class type: ModelSamplingSD3
Executing node 3, title: KSampler, class type: KSampler
[ComfyUI] Requested to load WAN21
[ComfyUI] loaded completely 124343.96281542968 27251.406372070312 True
[ComfyUI]
[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]
[ComfyUI] 3%|▎ | 1/30 [00:06<03:21, 6.94s/it]
[ComfyUI] 7%|▋ | 2/30 [00:16<03:53, 8.35s/it]
[ComfyUI] 10%|█ | 3/30 [00:25<03:57, 8.81s/it]
[ComfyUI] TeaCache: Initialized
[ComfyUI]
[ComfyUI] 13%|█▎ | 4/30 [00:38<04:26, 10.25s/it]
[ComfyUI] 20%|██ | 6/30 [00:47<02:57, 7.41s/it]
[ComfyUI] 27%|██▋ | 8/30 [00:57<02:18, 6.32s/it]
[ComfyUI] 33%|███▎ | 10/30 [01:07<01:55, 5.77s/it]
[ComfyUI] 40%|████ | 12/30 [01:17<01:38, 5.45s/it]
[ComfyUI] 47%|████▋ | 14/30 [01:26<01:24, 5.26s/it]
[ComfyUI] 53%|█████▎ | 16/30 [01:36<01:11, 5.13s/it]
[ComfyUI] 60%|██████ | 18/30 [01:46<01:00, 5.05s/it]
[ComfyUI] 67%|██████▋ | 20/30 [01:56<00:49, 4.99s/it]
[ComfyUI] 73%|███████▎ | 22/30 [02:05<00:39, 4.95s/it]
[ComfyUI] 80%|████████ | 24/30 [02:15<00:29, 4.98s/it]
[ComfyUI] 83%|████████▎ | 25/30 [02:15<00:20, 4.11s/it]
[ComfyUI] 87%|████████▋ | 26/30 [02:25<00:21, 5.26s/it]
[ComfyUI] 90%|█████████ | 27/30 [02:25<00:12, 4.10s/it]
[ComfyUI] 93%|█████████▎| 28/30 [02:35<00:10, 5.46s/it]
[ComfyUI] 100%|██████████| 30/30 [02:45<00:00, 5.22s/it]
[ComfyUI] 100%|██████████| 30/30 [02:45<00:00, 5.51s/it]
[ComfyUI] Requested to load WanVAE
Executing node 8, title: VAE Decode, class type: VAEDecode
[ComfyUI] loaded completely 98169.38668441772 242.02829551696777 True
Executing node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine
[ComfyUI] Prompt executed in 186.12 seconds
outputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}
====================================
R8_Wan_00001.png
R8_Wan_00001.mp4
This model runs on Nvidia H100 GPU hardware. We don't yet have enough runs of this model to provide performance information.