shridharathi/motion-blur-vid
- Public
- 20 runs
- H100
Prediction
shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15
- ID: 9n8af4ay15rma0cnv599qccd34
- Status: Succeeded
- Source: Web
- Hardware: H100
Input
- frames: 81
- prompt: BLUR style, motion blur, a car zooming on the highway at night
- fast_mode: Balanced
- resolution: 480p
- aspect_ratio: 16:9
- sample_shift: 8
- sample_steps: 30
- negative_prompt:
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{
  "frames": 81,
  "prompt": "BLUR style, motion blur, a car zooming on the highway at night",
  "fast_mode": "Balanced",
  "resolution": "480p",
  "aspect_ratio": "16:9",
  "sample_shift": 8,
  "sample_steps": 30,
  "negative_prompt": "",
  "lora_strength_clip": 1,
  "sample_guide_scale": 5,
  "lora_strength_model": 1
}
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/motion-blur-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15",
  {
    input: {
      frames: 81,
      prompt: "BLUR style, motion blur, a car zooming on the highway at night",
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the generated video to disk (the model outputs an MP4, not an image):
await writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run shridharathi/motion-blur-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15",
    input={
        "frames": 81,
        "prompt": "BLUR style, motion blur, a car zooming on the highway at night",
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
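The snippet above only prints the output. To keep the generated clip, you can write it to disk as well. The sketch below assumes a recent version of the replicate Python client, where replicate.run returns file-like outputs exposing url and read(); older client versions return plain URL strings instead, and the filename output.mp4 is purely illustrative.

import replicate

output = replicate.run(
    "shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15",
    input={
        "frames": 81,
        "prompt": "BLUR style, motion blur, a car zooming on the highway at night",
        "resolution": "480p",
        "aspect_ratio": "16:9",
    },
)

# Assumption: output[0] is a file-like object with a url attribute and read();
# if your client version returns a plain URL string, download it instead.
video = output[0]
print(video.url)

with open("output.mp4", "wb") as f:  # illustrative filename
    f.write(video.read())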
Run shridharathi/motion-blur-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15",
    "input": {
      "frames": 81,
      "prompt": "BLUR style, motion blur, a car zooming on the highway at night",
      "fast_mode": "Balanced",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
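The Prefer: wait header asks the API to hold the connection open, but it only waits a limited time (on the order of a minute), while these clips take roughly three minutes to render, so the response may come back while the prediction is still processing. The sketch below polls the prediction's get URL until it reaches a terminal state; it uses Python with the requests package (not part of the examples above), and the prediction ID is a placeholder taken from the run shown on this page.

import os
import time
import requests

# Placeholder ID: use the "id" (or urls.get) from your own create response.
prediction_id = "9n8af4ay15rma0cnv599qccd34"
headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

while True:
    resp = requests.get(
        f"https://api.replicate.com/v1/predictions/{prediction_id}",
        headers=headers,
        timeout=30,
    )
    resp.raise_for_status()
    prediction = resp.json()
    if prediction["status"] in ("succeeded", "failed", "canceled"):
        break
    time.sleep(5)  # each clip takes roughly three minutes at these settings

print(prediction["status"], prediction.get("output"))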
Output
{ "completed_at": "2025-03-27T20:39:36.198352Z", "created_at": "2025-03-27T20:36:48.777000Z", "data_removed": false, "error": null, "id": "9n8af4ay15rma0cnv599qccd34", "input": { "frames": 81, "prompt": "BLUR style, motion blur, a car zooming on the highway at night", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 3399017871\n✅ 14b_b10322233ae37dabb3c31bcf57a0abdf.safetensors already cached\nChecking inputs\n====================================\nChecking weights\n✅ 14b_b10322233ae37dabb3c31bcf57a0abdf.safetensors exists in loras directory\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] Resetting TeaCache state\n[ComfyUI]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:11, 6.59s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:15<03:49, 8.20s/it]\n[ComfyUI] 10%|█ | 3/30 [00:25<03:55, 8.71s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:37<04:24, 10.17s/it]\n[ComfyUI] 20%|██ | 6/30 [00:47<02:56, 7.35s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:57<02:18, 6.28s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:06<01:54, 5.75s/it]\n[ComfyUI] 40%|████ | 12/30 [01:16<01:37, 5.43s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:26<01:23, 5.24s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:36<01:11, 5.12s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:45<01:00, 5.04s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:55<00:49, 4.98s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:05<00:39, 4.95s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:15<00:29, 4.96s/it]\n[ComfyUI] 83%|████████▎ | 25/30 [02:15<00:20, 4.09s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:24<00:20, 5.23s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:34<00:10, 5.09s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 5.01s/it]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 5.48s/it]\n[ComfyUI] Prompt executed in 167.23 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 167.408915704, "total_time": 167.421352 }, "output": [ "https://replicate.delivery/xezq/W9wfvJ27ciQBH67bKL2c9skkwtffQK5HtRRLeyMCrrthYazRB/R8_Wan_00001.mp4" ], "started_at": "2025-03-27T20:36:48.789437Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-chfxagm742nr2ty2zi7qos3mlmnxvuu2hnajvgdseidhxdf4ttfq", "get": "https://api.replicate.com/v1/predictions/9n8af4ay15rma0cnv599qccd34", "cancel": "https://api.replicate.com/v1/predictions/9n8af4ay15rma0cnv599qccd34/cancel" }, "version": "7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15" }
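The output field of this response is a list containing a single MP4 URL, and metrics.predict_time shows the run took about 167 seconds. If you are working from the raw prediction JSON (for example, saved from the curl call above), a small standard-library sketch can pull the URL out and download the clip; the local filenames are illustrative.

import json
import urllib.request

# Assumes the prediction response was saved locally, e.g. redirected
# from the curl call above into prediction.json.
with open("prediction.json") as f:
    prediction = json.load(f)

video_url = prediction["output"][0]
print(f"{prediction['metrics']['predict_time']:.0f}s of compute -> {video_url}")

# Download the generated video next to the script.
urllib.request.urlretrieve(video_url, "motion_blur.mp4")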
Prediction
shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15
- ID: c433gap55hrm80cnv5avdhsjmw
- Status: Succeeded
- Source: Web
- Hardware: H100
Input
- frames: 81
- prompt: BLUR style, motion blur, a biker
- fast_mode: Balanced
- resolution: 480p
- aspect_ratio: 16:9
- sample_shift: 8
- sample_steps: 30
- negative_prompt:
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{
  "frames": 81,
  "prompt": "BLUR style, motion blur, a biker",
  "fast_mode": "Balanced",
  "resolution": "480p",
  "aspect_ratio": "16:9",
  "sample_shift": 8,
  "sample_steps": 30,
  "negative_prompt": "",
  "lora_strength_clip": 1,
  "sample_guide_scale": 5,
  "lora_strength_model": 1
}
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/motion-blur-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15",
  {
    input: {
      frames: 81,
      prompt: "BLUR style, motion blur, a biker",
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the generated video to disk (the model outputs an MP4, not an image):
await writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run shridharathi/motion-blur-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15",
    input={
        "frames": 81,
        "prompt": "BLUR style, motion blur, a biker",
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
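If you'd rather not block inside replicate.run for the few minutes a clip takes, you can create the prediction first and come back to it later. The following is a sketch only, assuming the Python client exposes replicate.predictions.create (taking the bare version hash shown on this page) and a wait() helper on the returned prediction; check your client version if these names differ.

import replicate

# Assumption: predictions.create accepts the bare version hash.
prediction = replicate.predictions.create(
    version="7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15",
    input={
        "frames": 81,
        "prompt": "BLUR style, motion blur, a biker",
        "resolution": "480p",
        "aspect_ratio": "16:9",
    },
)
print(prediction.id, prediction.status)  # e.g. "starting"

# Assumption: wait() polls until the prediction reaches a terminal state.
prediction.wait()
print(prediction.status, prediction.output)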
Run shridharathi/motion-blur-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15",
    "input": {
      "frames": 81,
      "prompt": "BLUR style, motion blur, a biker",
      "fast_mode": "Balanced",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-03-27T20:43:28.342893Z", "created_at": "2025-03-27T20:40:31.788000Z", "data_removed": false, "error": null, "id": "c433gap55hrm80cnv5avdhsjmw", "input": { "frames": 81, "prompt": "BLUR style, motion blur, a biker", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 3927730274\n2025-03-27T20:40:31Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmp28p91pyr/weights url=https://replicate.delivery/xezq/n0RvPffWzRpWkE0wB6fJqWPKtujMUdQnLlQQz4ibXcgM9s5oA/trained_model.tar\n2025-03-27T20:40:34Z | INFO | [ Complete ] dest=/tmp/tmp28p91pyr/weights size=\"307 MB\" total_elapsed=2.336s url=https://replicate.delivery/xezq/n0RvPffWzRpWkE0wB6fJqWPKtujMUdQnLlQQz4ibXcgM9s5oA/trained_model.tar\nChecking inputs\n====================================\nChecking weights\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ 14b_b10322233ae37dabb3c31bcf57a0abdf.safetensors exists in loras directory\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 40, title: EmptyHunyuanLatentVideo, class type: EmptyHunyuanLatentVideo\nExecuting node 49, title: Load LoRA, class type: LoraLoader\n[ComfyUI] Requested to load WanTEModel\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 113273.66620521546 10835.4765625 True\nExecuting node 48, title: ModelSamplingSD3, class type: ModelSamplingSD3\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI] Requested to load WAN21\n[ComfyUI] loaded completely 123801.91498866271 27251.406372070312 True\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] Resetting TeaCache state\n[ComfyUI]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:07, 6.45s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:15<03:45, 8.05s/it]\n[ComfyUI] 10%|█ | 3/30 [00:24<03:51, 8.57s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:37<04:20, 10.02s/it]\n[ComfyUI] 20%|██ | 6/30 [00:46<02:54, 7.25s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:56<02:16, 6.19s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:05<01:53, 5.66s/it]\n[ComfyUI] 40%|████ | 12/30 [01:15<01:36, 5.35s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:25<01:22, 5.17s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:34<01:10, 5.05s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:44<00:59, 4.97s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:53<00:49, 4.92s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:03<00:39, 4.88s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:12<00:29, 4.85s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:22<00:19, 4.84s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:32<00:09, 4.83s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:41<00:00, 4.82s/it]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] 100%|██████████| 30/30 [02:41<00:00, 5.39s/it]\n[ComfyUI] Prompt executed in 174.00 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 176.545920851, "total_time": 
176.554893 }, "output": [ "https://replicate.delivery/xezq/CSYVt5bJv54ZFNUhHqNwCgdpcPfnnYQiZMC6ljk9BYF4UbOKA/R8_Wan_00001.mp4" ], "started_at": "2025-03-27T20:40:31.796972Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-yjfcuop5abm76zy6hg2l4763rqqryfs7a2mlyylfyrggn465wtwa", "get": "https://api.replicate.com/v1/predictions/c433gap55hrm80cnv5avdhsjmw", "cancel": "https://api.replicate.com/v1/predictions/c433gap55hrm80cnv5avdhsjmw/cancel" }, "version": "7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15" }
Prediction
shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15
- ID: kbq6gta231rme0cnv5crqpjfdr
- Status: Succeeded
- Source: Web
- Hardware: H100
Input
- frames: 81
- prompt: BLUR style, motion blur, a girl running through tall grass fields at night, neon lights
- fast_mode: Balanced
- resolution: 480p
- aspect_ratio: 16:9
- sample_shift: 8
- sample_steps: 30
- negative_prompt:
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{
  "frames": 81,
  "prompt": "BLUR style, motion blur, a girl running through tall grass fields at night, neon lights",
  "fast_mode": "Balanced",
  "resolution": "480p",
  "aspect_ratio": "16:9",
  "sample_shift": 8,
  "sample_steps": 30,
  "negative_prompt": "",
  "lora_strength_clip": 1,
  "sample_guide_scale": 5,
  "lora_strength_model": 1
}
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/motion-blur-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15",
  {
    input: {
      frames: 81,
      prompt: "BLUR style, motion blur, a girl running through tall grass fields at night, neon lights",
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the generated video to disk (the model outputs an MP4, not an image):
await writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run shridharathi/motion-blur-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15",
    input={
        "frames": 81,
        "prompt": "BLUR style, motion blur, a girl running through tall grass fields at night, neon lights",
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
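All three predictions on this page differ only in their prompt, so a small loop over replicate.run is enough to batch-render several BLUR-style clips. The prompts below are the ones used on this page; the output filenames and the assumption that output[0] exposes read() (as in recent client versions) are illustrative.

import replicate

MODEL = "shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15"

prompts = [
    "BLUR style, motion blur, a car zooming on the highway at night",
    "BLUR style, motion blur, a biker",
    "BLUR style, motion blur, a girl running through tall grass fields at night, neon lights",
]

for i, prompt in enumerate(prompts):
    output = replicate.run(
        MODEL,
        input={"frames": 81, "prompt": prompt, "resolution": "480p", "aspect_ratio": "16:9"},
    )
    # Each clip takes roughly three minutes on an H100 at these settings.
    with open(f"blur_{i}.mp4", "wb") as f:  # illustrative filenames
        f.write(output[0].read())
    print(f"wrote blur_{i}.mp4")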
Run shridharathi/motion-blur-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "shridharathi/motion-blur-vid:7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15",
    "input": {
      "frames": 81,
      "prompt": "BLUR style, motion blur, a girl running through tall grass fields at night, neon lights",
      "fast_mode": "Balanced",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-03-27T20:47:05.695542Z", "created_at": "2025-03-27T20:44:20.376000Z", "data_removed": false, "error": null, "id": "kbq6gta231rme0cnv5crqpjfdr", "input": { "frames": 81, "prompt": "BLUR style, motion blur, a girl running through tall grass fields at night, neon lights", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 3113453718\n✅ 14b_b10322233ae37dabb3c31bcf57a0abdf.safetensors already cached\nChecking inputs\n====================================\nChecking weights\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ 14b_b10322233ae37dabb3c31bcf57a0abdf.safetensors exists in loras directory\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] Resetting TeaCache state\n[ComfyUI]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:07, 6.45s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:15<03:45, 8.07s/it]\n[ComfyUI] 10%|█ | 3/30 [00:24<03:52, 8.61s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:37<04:21, 10.04s/it]\n[ComfyUI] 20%|██ | 6/30 [00:46<02:54, 7.27s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:56<02:16, 6.20s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:06<01:53, 5.68s/it]\n[ComfyUI] 40%|████ | 12/30 [01:15<01:36, 5.38s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:25<01:22, 5.18s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:34<01:10, 5.06s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:44<00:59, 4.99s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:54<00:49, 4.94s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:03<00:39, 4.89s/it]\n[ComfyUI] 77%|███████▋ | 23/30 [02:03<00:28, 4.03s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:13<00:30, 5.16s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:23<00:20, 5.03s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:32<00:09, 4.96s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:42<00:00, 4.91s/it]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] 100%|██████████| 30/30 [02:42<00:00, 5.41s/it]\n[ComfyUI] Prompt executed in 165.14 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 165.31205004, "total_time": 165.319542 }, "output": [ "https://replicate.delivery/xezq/vosH9DwRgmohO52FrEHYnfrFASXj3SS1fcjjKqRHf8FTat5oA/R8_Wan_00001.mp4" ], "started_at": "2025-03-27T20:44:20.383492Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-s5lp4nxf57qn7vmoz3dyf4emyqjsn6inwk6mcfoqqv6quyix7fca", "get": "https://api.replicate.com/v1/predictions/kbq6gta231rme0cnv5crqpjfdr", "cancel": "https://api.replicate.com/v1/predictions/kbq6gta231rme0cnv5crqpjfdr/cancel" }, "version": "7549a1cfbc75152feec6bae45d51c7d6006203999234953d2443c6802c67ad15" }
Want to make some of these yourself?
Run this model