shridharathi/van-gogh-vid
Make your videos van gogh-esque
- Public
- 30 runs
- H100
Prediction
shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a
ID: g648agez7hrm80cnv92s2c2xzg
Status: Succeeded
Source: Web
Hardware: H100
Input
- frames: 81
- prompt: GOGH style, painting of a man walking in a field with trees and sunlight
- fast_mode: Balanced
- resolution: 480p
- aspect_ratio: 16:9
- sample_shift: 8
- sample_steps: 30
- negative_prompt:
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{ "frames": 81, "prompt": "GOGH style, painting of a man walking in a field with trees and sunlight", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/van-gogh-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises";

const output = await replicate.run(
  "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
  {
    input: {
      frames: 81,
      prompt: "GOGH style, painting of a man walking in a field with trees and sunlight",
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// The model returns a list of output files; access the first file's URL:
console.log(output[0].url()); //=> "http://example.com"

// Or write the generated video to disk:
await fs.writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run shridharathi/van-gogh-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
    input={
        "frames": 81,
        "prompt": "GOGH style, painting of a man walking in a field with trees and sunlight",
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
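The snippet above only prints the output, which is a list containing the URL of the rendered video. If you want to save the file locally, a minimal sketch along the following lines should work (assumptions: the inputs omitted here fall back to the model's defaults, newer client versions may return file-like objects rather than plain URL strings, and the output filename is arbitrary):

import urllib.request

import replicate

# Run the model; only a subset of inputs is passed here (assumed defaults for the rest).
output = replicate.run(
    "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
    input={
        "frames": 81,
        "prompt": "GOGH style, painting of a man walking in a field with trees and sunlight",
        "sample_steps": 30,
    },
)

# Save each returned video. Items may be plain URL strings or file-like
# objects exposing a .url attribute, depending on the client version.
for i, item in enumerate(output):
    url = getattr(item, "url", item)
    urllib.request.urlretrieve(str(url), f"van_gogh_output_{i}.mp4")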
Run shridharathi/van-gogh-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
    "input": {
      "frames": 81,
      "prompt": "GOGH style, painting of a man walking in a field with trees and sunlight",
      "fast_mode": "Balanced",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
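The "Prefer: wait" header asks the API to hold the connection open until the prediction finishes. For longer renders you can instead create the prediction and poll its "get" URL until it reaches a terminal status, as in the output below. A rough Python sketch of that flow, using only the create and get endpoints shown on this page (the requests dependency and the 5-second poll interval are arbitrary choices, not part of the API):

import os
import time

import requests

headers = {
    "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}

# Create the prediction without waiting for it to finish.
prediction = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers=headers,
    json={
        "version": "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
        "input": {
            "frames": 81,
            "prompt": "GOGH style, painting of a man walking in a field with trees and sunlight",
            "sample_steps": 30,
        },
    },
).json()

# Poll the prediction's own "get" URL until it reaches a terminal state.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(5)
    prediction = requests.get(prediction["urls"]["get"], headers=headers).json()

print(prediction["status"], prediction.get("output"))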
Output
{ "completed_at": "2025-03-28T01:06:59.341050Z", "created_at": "2025-03-28T01:02:47.100000Z", "data_removed": false, "error": null, "id": "g648agez7hrm80cnv92s2c2xzg", "input": { "frames": 81, "prompt": "GOGH style, painting of a man walking in a field with trees and sunlight", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 578857622\n2025-03-28T01:03:35Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmpxptoopvc/weights url=https://replicate.delivery/xezq/KgMA5f1XAqVuW672nNaxVDLb7KfaETMfFCnMHzCJ1R3hqy5oA/trained_model.tar\n2025-03-28T01:03:38Z | INFO | [ Complete ] dest=/tmp/tmpxptoopvc/weights size=\"307 MB\" total_elapsed=3.163s url=https://replicate.delivery/xezq/KgMA5f1XAqVuW672nNaxVDLb7KfaETMfFCnMHzCJ1R3hqy5oA/trained_model.tar\nChecking inputs\n====================================\nChecking weights\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ 14b_64a1c2e3ddb7864e8e05b8d6455d2865.safetensors exists in loras directory\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n⏳ Downloading wan2.1_t2v_14B_bf16.safetensors to ComfyUI/models/diffusion_models\n✅ wan2.1_t2v_14B_bf16.safetensors downloaded to ComfyUI/models/diffusion_models in 14.40s, size: 27253.24MB\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 39, title: Load VAE, class type: VAELoader\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16\nExecuting node 40, title: EmptyHunyuanLatentVideo, class type: EmptyHunyuanLatentVideo\nExecuting node 38, title: Load CLIP, class type: CLIPLoader\n[ComfyUI] CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16\n[ComfyUI] Requested to load WanTEModel\nExecuting node 7, title: CLIP Text Encode (Negative Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 141327.4875 10835.4765625 True\nExecuting node 37, title: Load Diffusion Model, class type: UNETLoader\n[ComfyUI] model weight dtype torch.float16, manual cast: None\n[ComfyUI] model_type FLOW\nExecuting node 54, title: WanVideo Tea Cache (native), class type: WanVideoTeaCacheKJ\nExecuting node 49, title: Load LoRA, class type: LoraLoader\n[ComfyUI] Requested to load WanTEModel\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 141069.4875 10835.4765625 True\nExecuting node 48, title: ModelSamplingSD3, class type: ModelSamplingSD3\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI] Requested to load WAN21\n[ComfyUI] loaded completely 124343.96281542968 27251.406372070312 True\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:21, 6.94s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:16<03:53, 8.35s/it]\n[ComfyUI] 10%|█ | 3/30 [00:25<03:57, 8.81s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:38<04:26, 10.25s/it]\n[ComfyUI] 20%|██ | 6/30 [00:47<02:57, 7.41s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:57<02:18, 6.32s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:07<01:55, 5.77s/it]\n[ComfyUI] 40%|████ | 12/30 [01:17<01:38, 5.45s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:26<01:24, 5.26s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:36<01:11, 5.13s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:46<01:00, 5.05s/it]\n[ComfyUI] 
67%|██████▋ | 20/30 [01:56<00:49, 4.99s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:05<00:39, 4.95s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:15<00:29, 4.98s/it]\n[ComfyUI] 83%|████████▎ | 25/30 [02:15<00:20, 4.11s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:25<00:21, 5.26s/it]\n[ComfyUI] 90%|█████████ | 27/30 [02:25<00:12, 4.10s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:35<00:10, 5.46s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:45<00:00, 5.22s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:45<00:00, 5.51s/it]\n[ComfyUI] Requested to load WanVAE\nExecuting node 8, title: VAE Decode, class type: VAEDecode\n[ComfyUI] loaded completely 98169.38668441772 242.02829551696777 True\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Prompt executed in 186.12 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 203.906330106, "total_time": 252.24105 }, "output": [ "https://replicate.delivery/xezq/rwGhWqKagypSJBEJAY3NFDt4W33ffhMERfFS2DskbW3mB15oA/R8_Wan_00001.mp4" ], "started_at": "2025-03-28T01:03:35.434720Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-hj4qtfclofcaqgpdjjseneqtjmvolc6odzwzpttzqrev3mvki7fq", "get": "https://api.replicate.com/v1/predictions/g648agez7hrm80cnv92s2c2xzg", "cancel": "https://api.replicate.com/v1/predictions/g648agez7hrm80cnv92s2c2xzg/cancel" }, "version": "bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a" }
Prediction
shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a
ID: kx0svfdqwxrmc0cnv99rpgg3ww
Status: Succeeded
Source: Web
Hardware: H100
Input
- frames: 81
- prompt: GOGH style, painting, a giraffe walking through the sun-drenched streets of san francisco
- fast_mode: Balanced
- resolution: 480p
- aspect_ratio: 16:9
- sample_shift: 8
- sample_steps: 30
- negative_prompt:
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{ "frames": 81, "prompt": "GOGH style, painting, a giraffe walking through the sun-drenched streets of san francisco", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/van-gogh-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises";

const output = await replicate.run(
  "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
  {
    input: {
      frames: 81,
      prompt: "GOGH style, painting, a giraffe walking through the sun-drenched streets of san francisco",
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// The model returns a list of output files; access the first file's URL:
console.log(output[0].url()); //=> "http://example.com"

// Or write the generated video to disk:
await fs.writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run shridharathi/van-gogh-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
    input={
        "frames": 81,
        "prompt": "GOGH style, painting, a giraffe walking through the sun-drenched streets of san francisco",
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run shridharathi/van-gogh-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
    "input": {
      "frames": 81,
      "prompt": "GOGH style, painting, a giraffe walking through the sun-drenched streets of san francisco",
      "fast_mode": "Balanced",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-03-28T01:22:04.360855Z", "created_at": "2025-03-28T01:17:54.535000Z", "data_removed": false, "error": null, "id": "kx0svfdqwxrmc0cnv99rpgg3ww", "input": { "frames": 81, "prompt": "GOGH style, painting, a giraffe walking through the sun-drenched streets of san francisco", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 780320570\n2025-03-28T01:18:30Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmpvdjdwpnn/weights url=https://replicate.delivery/xezq/KgMA5f1XAqVuW672nNaxVDLb7KfaETMfFCnMHzCJ1R3hqy5oA/trained_model.tar\n2025-03-28T01:18:33Z | INFO | [ Complete ] dest=/tmp/tmpvdjdwpnn/weights size=\"307 MB\" total_elapsed=2.739s url=https://replicate.delivery/xezq/KgMA5f1XAqVuW672nNaxVDLb7KfaETMfFCnMHzCJ1R3hqy5oA/trained_model.tar\nChecking inputs\n====================================\nChecking weights\n⏳ Downloading wan2.1_t2v_14B_bf16.safetensors to ComfyUI/models/diffusion_models\n✅ wan2.1_t2v_14B_bf16.safetensors downloaded to ComfyUI/models/diffusion_models in 21.04s, size: 27253.24MB\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ 14b_64a1c2e3ddb7864e8e05b8d6455d2865.safetensors exists in loras directory\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 40, title: EmptyHunyuanLatentVideo, class type: EmptyHunyuanLatentVideo\nExecuting node 7, title: CLIP Text Encode (Negative Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 132525.11545448302 10835.4765625 True\nExecuting node 37, title: Load Diffusion Model, class type: UNETLoader\n[ComfyUI] model weight dtype torch.float16, manual cast: None\n[ComfyUI] model_type FLOW\nExecuting node 54, title: WanVideo Tea Cache (native), class type: WanVideoTeaCacheKJ\nExecuting node 49, title: Load LoRA, class type: LoraLoader\n[ComfyUI] Requested to load WanTEModel\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 132525.11545448302 10835.4765625 True\nExecuting node 48, title: ModelSamplingSD3, class type: ModelSamplingSD3\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI] Requested to load WAN21\n[ComfyUI] loaded completely 115799.59076991271 27251.406372070312 True\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:08, 6.49s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:15<03:47, 8.12s/it]\n[ComfyUI] 10%|█ | 3/30 [00:25<03:53, 8.65s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:37<04:23, 10.12s/it]\n[ComfyUI] 20%|██ | 6/30 [00:47<02:56, 7.36s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:57<02:18, 6.29s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:06<01:55, 5.77s/it]\n[ComfyUI] 40%|████ | 12/30 [01:16<01:38, 5.45s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:26<01:24, 5.27s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:36<01:12, 5.15s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:45<01:00, 5.06s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:55<00:50, 5.00s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:05<00:39, 4.97s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:15<00:29, 4.94s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:24<00:19, 4.92s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:34<00:09, 4.91s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 4.91s/it]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 
50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 5.48s/it]\n[ComfyUI] Prompt executed in 189.95 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 213.974078084, "total_time": 249.825855 }, "output": [ "https://replicate.delivery/xezq/rCzhfcqW5jWlBC9wbqoM5h35bdRApWt86FLaBbEzjKNeu6cUA/R8_Wan_00001.mp4" ], "started_at": "2025-03-28T01:18:30.386777Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-vqdwuesiadjvsyf6ieyqtbuesk3lerz2jybqs7afhu2ttk7cnm7q", "get": "https://api.replicate.com/v1/predictions/kx0svfdqwxrmc0cnv99rpgg3ww", "cancel": "https://api.replicate.com/v1/predictions/kx0svfdqwxrmc0cnv99rpgg3ww/cancel" }, "version": "bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a" }
Prediction
shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a
ID: anyd763qg9rmc0cnv9k8y3bgkw
Status: Succeeded
Source: Web
Hardware: H100
Input
- frames: 81
- prompt: GOGH style, painting of a man paragliding in the green swiss alps
- fast_mode: Balanced
- resolution: 480p
- aspect_ratio: 16:9
- sample_shift: 8
- sample_steps: 30
- negative_prompt:
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{ "frames": 81, "prompt": "GOGH style, painting of a man paragliding in the green swiss alps ", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/van-gogh-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises";

const output = await replicate.run(
  "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
  {
    input: {
      frames: 81,
      prompt: "GOGH style, painting of a man paragliding in the green swiss alps ",
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// The model returns a list of output files; access the first file's URL:
console.log(output[0].url()); //=> "http://example.com"

// Or write the generated video to disk:
await fs.writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run shridharathi/van-gogh-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
    input={
        "frames": 81,
        "prompt": "GOGH style, painting of a man paragliding in the green swiss alps ",
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run shridharathi/van-gogh-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "shridharathi/van-gogh-vid:bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a",
    "input": {
      "frames": 81,
      "prompt": "GOGH style, painting of a man paragliding in the green swiss alps ",
      "fast_mode": "Balanced",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-03-28T01:42:35.883587Z", "created_at": "2025-03-28T01:38:23.234000Z", "data_removed": false, "error": null, "id": "anyd763qg9rmc0cnv9k8y3bgkw", "input": { "frames": 81, "prompt": "GOGH style, painting of a man paragliding in the green swiss alps ", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 3496058792\n2025-03-28T01:39:11Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmp18h_twdd/weights url=https://replicate.delivery/xezq/KgMA5f1XAqVuW672nNaxVDLb7KfaETMfFCnMHzCJ1R3hqy5oA/trained_model.tar\n2025-03-28T01:39:15Z | INFO | [ Complete ] dest=/tmp/tmp18h_twdd/weights size=\"307 MB\" total_elapsed=3.447s url=https://replicate.delivery/xezq/KgMA5f1XAqVuW672nNaxVDLb7KfaETMfFCnMHzCJ1R3hqy5oA/trained_model.tar\nChecking inputs\n====================================\nChecking weights\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ 14b_64a1c2e3ddb7864e8e05b8d6455d2865.safetensors exists in loras directory\n⏳ Downloading wan2.1_t2v_14B_bf16.safetensors to ComfyUI/models/diffusion_models\n✅ wan2.1_t2v_14B_bf16.safetensors downloaded to ComfyUI/models/diffusion_models in 17.26s, size: 27253.24MB\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 39, title: Load VAE, class type: VAELoader\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16\nExecuting node 40, title: EmptyHunyuanLatentVideo, class type: EmptyHunyuanLatentVideo\nExecuting node 38, title: Load CLIP, class type: CLIPLoader\n[ComfyUI] CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16\n[ComfyUI] Requested to load WanTEModel\nExecuting node 7, title: CLIP Text Encode (Negative Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 141327.4875 10835.4765625 True\nExecuting node 37, title: Load Diffusion Model, class type: UNETLoader\n[ComfyUI] model weight dtype torch.float16, manual cast: None\n[ComfyUI] model_type FLOW\nExecuting node 54, title: WanVideo Tea Cache (native), class type: WanVideoTeaCacheKJ\nExecuting node 49, title: Load LoRA, class type: LoraLoader\n[ComfyUI] Requested to load WanTEModel\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 141069.4875 10835.4765625 True\nExecuting node 48, title: ModelSamplingSD3, class type: ModelSamplingSD3\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI] Requested to load WAN21\n[ComfyUI] loaded completely 124343.96281542968 27251.406372070312 True\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:20, 6.91s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:16<03:53, 8.33s/it]\n[ComfyUI] 10%|█ | 3/30 [00:25<03:57, 8.80s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:37<04:25, 10.22s/it]\n[ComfyUI] 20%|██ | 6/30 [00:47<02:57, 7.38s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:57<02:18, 6.29s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:07<01:55, 5.76s/it]\n[ComfyUI] 40%|████ | 12/30 [01:16<01:37, 5.44s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:26<01:23, 5.25s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:36<01:11, 5.13s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:46<01:00, 5.04s/it]\n[ComfyUI] 
67%|██████▋ | 20/30 [01:55<00:49, 4.99s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:05<00:39, 4.95s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:15<00:29, 4.92s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:25<00:19, 4.91s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:34<00:09, 4.90s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 4.89s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 5.49s/it]\n[ComfyUI] Requested to load WanVAE\nExecuting node 8, title: VAE Decode, class type: VAEDecode\n[ComfyUI] loaded completely 98169.38668441772 242.02829551696777 True\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Prompt executed in 182.99 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 203.91798091, "total_time": 252.649587 }, "output": [ "https://replicate.delivery/xezq/fADEMVgtFRTRZaK3uWxSmO5iQHBTxkFT1ceR2ZVosAULC7cUA/R8_Wan_00001.mp4" ], "started_at": "2025-03-28T01:39:11.965606Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-4p3rgd3zlfxgzscx3phoiypmciqa36nxdu73nozmkdz2rqkn6vfa", "get": "https://api.replicate.com/v1/predictions/anyd763qg9rmc0cnv9k8y3bgkw", "cancel": "https://api.replicate.com/v1/predictions/anyd763qg9rmc0cnv9k8y3bgkw/cancel" }, "version": "bee70f3c8f0db29784b12b5129186eaa9607104308f592c5f07274cac7acbd2a" }
Want to make some of these yourself?
Run this model