shridharathi/ghibli-vid
Make a video of anything in Studio Ghibli style
- Public
- 91 runs
- H100
Prediction
shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0
ID: n5s2qschnxrma0cnv56s32pqbm
Status: Succeeded
Source: Web
Hardware: H100
Input
- frames: 81
- prompt: GHIBLI style, anime, a girl running through new york city
- fast_mode: Balanced
- resolution: 480p
- aspect_ratio: 16:9
- sample_shift: 8
- sample_steps: 30
- negative_prompt:
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{ "frames": 81, "prompt": "GHIBLI style, anime, a girl running through new york city", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
  {
    input: {
      frames: 81,
      prompt: "GHIBLI style, anime, a girl running through new york city",
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the video file to disk:
await writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    input={
        "frames": 81,
        "prompt": "GHIBLI style, anime, a girl running through new york city",
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
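The Python example above only prints the output list. To save the generated clip locally you can download the first output item; a minimal sketch, assuming each item is either a URL string (as in the Output JSON below) or a file-like object exposing a url attribute, and using ghibli_vid.mp4 purely as an illustrative filename:

import urllib.request

first = output[0]
# Newer client versions may return file objects instead of plain URL strings.
video_url = first if isinstance(first, str) else first.url
urllib.request.urlretrieve(video_url, "ghibli_vid.mp4")
print("saved ghibli_vid.mp4")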
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    "input": {
      "frames": 81,
      "prompt": "GHIBLI style, anime, a girl running through new york city",
      "fast_mode": "Balanced",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
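The Prefer: wait header blocks until the prediction finishes. If you drop it, the API responds immediately and you can poll the prediction’s get URL (listed under urls in the output below) until the status leaves the in-progress states. A rough Python sketch, assuming the third-party requests package is installed:

import os
import time
import requests

headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}
# urls.get from the create response, shown in the Output JSON below.
get_url = "https://api.replicate.com/v1/predictions/n5s2qschnxrma0cnv56s32pqbm"

while True:
    prediction = requests.get(get_url, headers=headers).json()
    if prediction["status"] in ("succeeded", "failed", "canceled"):
        break
    time.sleep(2)  # poll every couple of seconds

print(prediction["status"], prediction.get("output"))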
Output
{ "completed_at": "2025-03-27T20:34:22.942783Z", "created_at": "2025-03-27T20:31:34.319000Z", "data_removed": false, "error": null, "id": "n5s2qschnxrma0cnv56s32pqbm", "input": { "frames": 81, "prompt": "GHIBLI style, anime, a girl running through new york city", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 874149996\n✅ 14b_d21761924b1fb223ad93c834ec74afd9.safetensors already cached\nChecking inputs\n====================================\nChecking weights\n✅ 14b_d21761924b1fb223ad93c834ec74afd9.safetensors exists in loras directory\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] Resetting TeaCache state\n[ComfyUI]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:12, 6.62s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:16<03:51, 8.28s/it]\n[ComfyUI] 10%|█ | 3/30 [00:25<03:57, 8.80s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:37<04:26, 10.26s/it]\n[ComfyUI] 20%|██ | 6/30 [00:47<02:58, 7.42s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:57<02:19, 6.34s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:07<01:55, 5.80s/it]\n[ComfyUI] 40%|████ | 12/30 [01:17<01:38, 5.48s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:27<01:24, 5.29s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:36<01:12, 5.17s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:46<01:00, 5.08s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:56<00:50, 5.03s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:06<00:39, 4.99s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:16<00:29, 4.97s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:25<00:19, 4.95s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:35<00:09, 4.94s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:45<00:00, 4.93s/it]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] 100%|██████████| 30/30 [02:45<00:00, 5.52s/it]\n[ComfyUI] Prompt executed in 168.43 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 168.615737712, "total_time": 168.623783 }, "output": [ "https://replicate.delivery/xezq/4T0Ff7e3Yvjy30HeYRQjqWMG19epscJK6DEdFQ5LYTW5EazRB/R8_Wan_00001.mp4" ], "started_at": "2025-03-27T20:31:34.327045Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-o2s47uywl3fvj5h7ewud4q4ckzdsabnncpthh33qdonbw6cxo75q", "get": "https://api.replicate.com/v1/predictions/n5s2qschnxrma0cnv56s32pqbm", "cancel": "https://api.replicate.com/v1/predictions/n5s2qschnxrma0cnv56s32pqbm/cancel" }, "version": "e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0" }
Prediction
shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0
ID: 8wzxhga0qsrmc0cnv5a9evkkd0
Status: Succeeded
Source: Web
Hardware: H100
Input
- frames: 81
- prompt: GHIBLI style, anime, a tiger steering a small sailboat in the ocean at sunset
- fast_mode: Balanced
- resolution: 480p
- aspect_ratio: 16:9
- sample_shift: 8
- sample_steps: 30
- negative_prompt:
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{ "frames": 81, "prompt": "GHIBLI style, anime, a tiger steering a small sailboat in the ocean at sunset", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
  {
    input: {
      frames: 81,
      prompt: "GHIBLI style, anime, a tiger steering a small sailboat in the ocean at sunset",
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the video file to disk:
await writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    input={
        "frames": 81,
        "prompt": "GHIBLI style, anime, a tiger steering a small sailboat in the ocean at sunset",
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
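The logs below report a frame rate of 16.0 fps for the encoded MP4, so the frames input maps directly to clip length. A quick sanity check for the default 81 frames:

frames = 81
fps = 16.0  # frame_rate reported in the logs below
print(f"approximate clip length: {frames / fps:.2f} s")  # ~5.06 s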
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    "input": {
      "frames": 81,
      "prompt": "GHIBLI style, anime, a tiger steering a small sailboat in the ocean at sunset",
      "fast_mode": "Balanced",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-03-27T20:41:41.485498Z", "created_at": "2025-03-27T20:38:52.350000Z", "data_removed": false, "error": null, "id": "8wzxhga0qsrmc0cnv5a9evkkd0", "input": { "frames": 81, "prompt": "GHIBLI style, anime, a tiger steering a small sailboat in the ocean at sunset", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 2130517410\n✅ 14b_d21761924b1fb223ad93c834ec74afd9.safetensors already cached\nChecking inputs\n====================================\nChecking weights\n✅ 14b_d21761924b1fb223ad93c834ec74afd9.safetensors exists in loras directory\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] Resetting TeaCache state\n[ComfyUI]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:12, 6.64s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:16<03:51, 8.27s/it]\n[ComfyUI] 10%|█ | 3/30 [00:25<03:57, 8.80s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:38<04:27, 10.28s/it]\n[ComfyUI] 20%|██ | 6/30 [00:47<02:58, 7.45s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:57<02:19, 6.36s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:07<01:56, 5.82s/it]\n[ComfyUI] 40%|████ | 12/30 [01:17<01:39, 5.50s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:27<01:24, 5.31s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:37<01:12, 5.19s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:47<01:01, 5.11s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:56<00:50, 5.05s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:06<00:40, 5.01s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:16<00:29, 4.98s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:26<00:19, 4.96s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:36<00:09, 4.95s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:46<00:00, 4.94s/it]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] 100%|██████████| 30/30 [02:46<00:00, 5.54s/it]\n[ComfyUI] Prompt executed in 168.96 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 169.128360108, "total_time": 169.135498 }, "output": [ "https://replicate.delivery/xezq/FVbSoquIADaDIRIyV6mN622Yz66DJ8dtXoLqEKXmtLSBqNHF/R8_Wan_00001.mp4" ], "started_at": "2025-03-27T20:38:52.357138Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-jwtstixkafbepasfya66xwxjzuktfjuv5m47afhg5wz2lsv3gxua", "get": "https://api.replicate.com/v1/predictions/8wzxhga0qsrmc0cnv5a9evkkd0", "cancel": "https://api.replicate.com/v1/predictions/8wzxhga0qsrmc0cnv5a9evkkd0/cancel" }, "version": "e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0" }
Prediction
shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0
ID: bk0cc89hchrme0cnv5btr0gjy4
Status: Succeeded
Source: Web
Hardware: H100
Input
- frames: 81
- prompt: GHIBLI style, anime, a girl and a robot shake hands
- fast_mode: Fast
- resolution: 480p
- aspect_ratio: 16:9
- sample_shift: 8
- sample_steps: 30
- negative_prompt:
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{ "frames": 81, "prompt": "GHIBLI style, anime, a girl and a robot shake hands", "fast_mode": "Fast", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
  {
    input: {
      frames: 81,
      prompt: "GHIBLI style, anime, a girl and a robot shake hands",
      fast_mode: "Fast",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the video file to disk:
await writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    input={
        "frames": 81,
        "prompt": "GHIBLI style, anime, a girl and a robot shake hands",
        "fast_mode": "Fast",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
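This run sets fast_mode to Fast rather than Balanced. To compare the two settings on the same prompt, you can loop over them and collect one output per mode; a sketch reusing the client set up above and the inputs of this example (only fast_mode changes):

inputs = {
    "frames": 81,
    "prompt": "GHIBLI style, anime, a girl and a robot shake hands",
    "resolution": "480p",
    "aspect_ratio": "16:9",
    "sample_shift": 8,
    "sample_steps": 30,
    "negative_prompt": "",
    "lora_strength_clip": 1,
    "sample_guide_scale": 5,
    "lora_strength_model": 1,
}

results = {}
for mode in ("Balanced", "Fast"):
    results[mode] = replicate.run(
        "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
        input={**inputs, "fast_mode": mode},
    )
print(results)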
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    "input": {
      "frames": 81,
      "prompt": "GHIBLI style, anime, a girl and a robot shake hands",
      "fast_mode": "Fast",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-03-27T20:45:03.451218Z", "created_at": "2025-03-27T20:42:05.028000Z", "data_removed": false, "error": null, "id": "bk0cc89hchrme0cnv5btr0gjy4", "input": { "frames": 81, "prompt": "GHIBLI style, anime, a girl and a robot shake hands", "fast_mode": "Fast", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 2817541705\n✅ 14b_d21761924b1fb223ad93c834ec74afd9.safetensors already cached\nChecking inputs\n====================================\nChecking weights\n✅ 14b_d21761924b1fb223ad93c834ec74afd9.safetensors exists in loras directory\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 54, title: WanVideo Tea Cache (native), class type: WanVideoTeaCacheKJ\nExecuting node 49, title: Load LoRA, class type: LoraLoader\n[ComfyUI] Requested to load WanTEModel\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 113273.66620521546 10835.4765625 True\nExecuting node 48, title: ModelSamplingSD3, class type: ModelSamplingSD3\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI] Requested to load WAN21\n[ComfyUI] loaded completely 123801.91498866271 27251.406372070312 True\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] Resetting TeaCache state\n[ComfyUI]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:18, 6.86s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:16<03:54, 8.36s/it]\n[ComfyUI] 10%|█ | 3/30 [00:25<03:58, 8.85s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:38<04:28, 10.32s/it]\n[ComfyUI] 20%|██ | 6/30 [00:48<02:59, 7.46s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:57<02:20, 6.37s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:07<01:56, 5.82s/it]\n[ComfyUI] 40%|████ | 12/30 [01:17<01:38, 5.49s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:27<01:24, 5.29s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:37<01:12, 5.17s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:47<01:00, 5.08s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:56<00:50, 5.02s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:06<00:39, 4.98s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:16<00:29, 4.96s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:26<00:19, 4.94s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:35<00:09, 4.93s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:45<00:00, 4.92s/it]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] 100%|██████████| 30/30 [02:45<00:00, 5.53s/it]\n[ComfyUI] Prompt executed in 178.25 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 178.416278624, "total_time": 178.423218 }, "output": [ "https://replicate.delivery/xezq/OyvaUUDh664BHdRFLxYIZcKtBWnz5pXBD7NHKM4tIi6zqNHF/R8_Wan_00001.mp4" ], "started_at": "2025-03-27T20:42:05.034940Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-fcf67ouxpvdwqbofo4rlt344atb2kqa2qcy3xfiovaakld32eciq", "get": 
"https://api.replicate.com/v1/predictions/bk0cc89hchrme0cnv5btr0gjy4", "cancel": "https://api.replicate.com/v1/predictions/bk0cc89hchrme0cnv5btr0gjy4/cancel" }, "version": "e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0" }
Prediction
shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0
ID: 5kvq6tkrqxrm80cnv5eb261ydg
Status: Succeeded
Source: Web
Hardware: H100
Input
- frames: 81
- prompt: GHIBLI style, anime, a young boy and a giraffe eat hamburgers
- fast_mode: Balanced
- resolution: 480p
- aspect_ratio: 16:9
- sample_shift: 8
- sample_steps: 30
- negative_prompt:
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{ "frames": 81, "prompt": "GHIBLI style, anime, a young boy and a giraffe eat hamburgers", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
  {
    input: {
      frames: 81,
      prompt: "GHIBLI style, anime, a young boy and a giraffe eat hamburgers",
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the video file to disk:
await writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    input={
        "frames": 81,
        "prompt": "GHIBLI style, anime, a young boy and a giraffe eat hamburgers",
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
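All of the examples on this page leave negative_prompt empty. If the output picks up elements you don’t want, you can describe them there; the value below is only an illustration, not a recommended default:

output = replicate.run(
    "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    input={
        "frames": 81,
        "prompt": "GHIBLI style, anime, a young boy and a giraffe eat hamburgers",
        "negative_prompt": "blurry, distorted faces, watermark, text overlay",  # illustrative value only
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1,
    },
)
print(output)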
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    "input": {
      "frames": 81,
      "prompt": "GHIBLI style, anime, a young boy and a giraffe eat hamburgers",
      "fast_mode": "Balanced",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-03-27T20:50:49.551159Z", "created_at": "2025-03-27T20:47:50.975000Z", "data_removed": false, "error": null, "id": "5kvq6tkrqxrm80cnv5eb261ydg", "input": { "frames": 81, "prompt": "GHIBLI style, anime, a young boy and a giraffe eat hamburgers", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 1245252999\n2025-03-27T20:47:51Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmpsxru9fg3/weights url=https://replicate.delivery/xezq/fdfTaktncYrl6k5qHKke1GO4RdXkOhMmfVuYTzQ8jirg0kyRB/trained_model.tar\n2025-03-27T20:47:54Z | INFO | [ Complete ] dest=/tmp/tmpsxru9fg3/weights size=\"307 MB\" total_elapsed=2.808s url=https://replicate.delivery/xezq/fdfTaktncYrl6k5qHKke1GO4RdXkOhMmfVuYTzQ8jirg0kyRB/trained_model.tar\nChecking inputs\n====================================\nChecking weights\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ 14b_d21761924b1fb223ad93c834ec74afd9.safetensors exists in loras directory\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 49, title: Load LoRA, class type: LoraLoader\n[ComfyUI] Requested to load WanTEModel\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 113273.66620521546 10835.4765625 True\nExecuting node 48, title: ModelSamplingSD3, class type: ModelSamplingSD3\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI] Requested to load WAN21\n[ComfyUI] loaded completely 123801.91498866271 27251.406372070312 True\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] Resetting TeaCache state\n[ComfyUI]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:07, 6.45s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:15<03:45, 8.07s/it]\n[ComfyUI] 10%|█ | 3/30 [00:24<03:52, 8.59s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:37<04:21, 10.05s/it]\n[ComfyUI] 20%|██ | 6/30 [00:46<02:54, 7.27s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:56<02:16, 6.21s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:06<01:53, 5.68s/it]\n[ComfyUI] 40%|████ | 12/30 [01:15<01:36, 5.37s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:25<01:22, 5.18s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:34<01:10, 5.06s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:44<00:59, 4.99s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:54<00:49, 4.94s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:03<00:39, 4.90s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:13<00:29, 4.87s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:23<00:19, 4.85s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:32<00:09, 4.84s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:42<00:00, 4.83s/it]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] 100%|██████████| 30/30 [02:42<00:00, 5.41s/it]\n[ComfyUI] Prompt executed in 174.54 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 177.563422963, "total_time": 178.576159 }, "output": [ 
"https://replicate.delivery/xezq/Qs7AXcca6J5hCxA49fDGjQvUkaaeeZl4JJ2cffHB82eYKsNHF/R8_Wan_00001.mp4" ], "started_at": "2025-03-27T20:47:51.987736Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-v2kqifbqdyzgrrlfynkzupzdo2k6voqvygh7radfetmpstgpmxba", "get": "https://api.replicate.com/v1/predictions/5kvq6tkrqxrm80cnv5eb261ydg", "cancel": "https://api.replicate.com/v1/predictions/5kvq6tkrqxrm80cnv5eb261ydg/cancel" }, "version": "e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0" }
Prediction
shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0
ID: hewbz9erbhrme0cntrftngnee0
Status: Succeeded
Source: Web
Hardware: H100
Input
- frames: 81
- prompt: GHIBLI style, a young boy eating hey with a giraffe next to him eating a hamburger
- fast_mode: Balanced
- resolution: 480p
- aspect_ratio: 16:9
- sample_shift: 8
- sample_steps: 30
- negative_prompt:
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{ "frames": 81, "prompt": "GHIBLI style, a young boy eating hey with a giraffe next to him eating a hamburger", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
  {
    input: {
      frames: 81,
      prompt: "GHIBLI style, a young boy eating hey with a giraffe next to him eating a hamburger",
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the video file to disk:
await writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    input={
        "frames": 81,
        "prompt": "GHIBLI style, a young boy eating hey with a giraffe next to him eating a hamburger",
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    "input": {
      "frames": 81,
      "prompt": "GHIBLI style, a young boy eating hey with a giraffe next to him eating a hamburger",
      "fast_mode": "Balanced",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-03-27T05:45:33.936290Z", "created_at": "2025-03-27T05:42:46.108000Z", "data_removed": false, "error": null, "id": "hewbz9erbhrme0cntrftngnee0", "input": { "frames": 81, "prompt": "GHIBLI style, a young boy eating hey with a giraffe next to him eating a hamburger", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 604701963\n✅ 14b_d21761924b1fb223ad93c834ec74afd9.safetensors already cached\nChecking inputs\n====================================\nChecking weights\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ 14b_d21761924b1fb223ad93c834ec74afd9.safetensors exists in loras directory\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] Resetting TeaCache state\n[ComfyUI]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:10, 6.57s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:15<03:49, 8.20s/it]\n[ComfyUI] 10%|█ | 3/30 [00:25<03:55, 8.73s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:37<04:26, 10.25s/it]\n[ComfyUI] 20%|██ | 6/30 [00:47<02:58, 7.42s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:57<02:19, 6.32s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:07<01:55, 5.77s/it]\n[ComfyUI] 40%|████ | 12/30 [01:16<01:38, 5.45s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:26<01:24, 5.26s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:36<01:11, 5.14s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:46<01:00, 5.05s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:55<00:49, 5.00s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:05<00:39, 4.96s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:15<00:29, 4.94s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:25<00:19, 4.92s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:34<00:09, 4.91s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 4.91s/it]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 5.49s/it]\n[ComfyUI] Prompt executed in 167.66 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 167.820399589, "total_time": 167.82829 }, "output": [ "https://replicate.delivery/xezq/KhNsfKSjnO2LWClWE25vN6HWPaS1fX2gWIIhoWvebse1fLljC/R8_Wan_00001.mp4" ], "started_at": "2025-03-27T05:42:46.115890Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-vlf73brkudvgqrdqm5qllhde3wt4swnwv3lyklgmc4qjkl4iynjq", "get": "https://api.replicate.com/v1/predictions/hewbz9erbhrme0cntrftngnee0", "cancel": "https://api.replicate.com/v1/predictions/hewbz9erbhrme0cntrftngnee0/cancel" }, "version": "e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0" }
Prediction
shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0
ID: wbcq1tq0n5rme0cntrp86gvezr
Status: Succeeded
Source: Web
Hardware: H100
Input
- frames: 81
- prompt: GHIBLI style, a tiger sits in a small sailboat drifting on the ocean at sunset, its fur glowing in the golden light as the sky blazes with hues of crimson and violet
- fast_mode: Balanced
- resolution: 480p
- aspect_ratio: 16:9
- sample_shift: 8
- sample_steps: 30
- negative_prompt:
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{ "frames": 81, "prompt": "GHIBLI style, a tiger sits in a small sailboat drifting on the ocean at sunset, its fur glowing in the golden light as the sky blazes with hues of crimson and violet", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
  {
    input: {
      frames: 81,
      prompt: "GHIBLI style, a tiger sits in a small sailboat drifting on the ocean at sunset, its fur glowing in the golden light as the sky blazes with hues of crimson and violet",
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the video file to disk:
await writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    input={
        "frames": 81,
        "prompt": "GHIBLI style, a tiger sits in a small sailboat drifting on the ocean at sunset, its fur glowing in the golden light as the sky blazes with hues of crimson and violet",
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    "input": {
      "frames": 81,
      "prompt": "GHIBLI style, a tiger sits in a small sailboat drifting on the ocean at sunset, its fur glowing in the golden light as the sky blazes with hues of crimson and violet",
      "fast_mode": "Balanced",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-03-27T05:59:47.562623Z", "created_at": "2025-03-27T05:57:00.201000Z", "data_removed": false, "error": null, "id": "wbcq1tq0n5rme0cntrp86gvezr", "input": { "frames": 81, "prompt": "GHIBLI style, a tiger sits in a small sailboat drifting on the ocean at sunset, its fur glowing in the golden light as the sky blazes with hues of crimson and violet", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 2068077984\n✅ 14b_d21761924b1fb223ad93c834ec74afd9.safetensors already cached\nChecking inputs\n====================================\nChecking weights\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n✅ 14b_d21761924b1fb223ad93c834ec74afd9.safetensors exists in loras directory\n✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] Resetting TeaCache state\n[ComfyUI]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:10, 6.57s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:15<03:49, 8.20s/it]\n[ComfyUI] 10%|█ | 3/30 [00:25<03:55, 8.74s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:37<04:25, 10.22s/it]\n[ComfyUI] 20%|██ | 6/30 [00:47<02:57, 7.39s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:57<02:18, 6.31s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:07<01:55, 5.77s/it]\n[ComfyUI] 40%|████ | 12/30 [01:16<01:38, 5.46s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:26<01:24, 5.26s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:36<01:11, 5.13s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:46<01:00, 5.05s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:55<00:49, 4.99s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:05<00:39, 4.95s/it]\n[ComfyUI] 77%|███████▋ | 23/30 [02:05<00:28, 4.08s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:15<00:31, 5.23s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:25<00:20, 5.09s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:34<00:10, 5.02s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 4.97s/it]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 5.48s/it]\n[ComfyUI] Prompt executed in 167.18 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 167.35351593, "total_time": 167.361623 }, "output": [ "https://replicate.delivery/xezq/xDR45bEA4gboONI2UJWgEAfy2yfapXUBJXeneSZezyAeUbKHF/R8_Wan_00001.mp4" ], "started_at": "2025-03-27T05:57:00.209107Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-dcbqefz7ud22crqpaiwsbbkgibkcpbuagxqjcvtwx3vp6eooux6q", "get": "https://api.replicate.com/v1/predictions/wbcq1tq0n5rme0cntrp86gvezr", "cancel": "https://api.replicate.com/v1/predictions/wbcq1tq0n5rme0cntrp86gvezr/cancel" }, "version": "e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0" }
Prediction
shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0
ID: zjem6aeb4hrma0cnts1s7563y0
Status: Succeeded
Source: Web
Hardware: H100
Input
- frames: 81
- prompt: GHIBLI style, anime, a girl is running through new york city eating a burger
- fast_mode: Balanced
- resolution: 480p
- aspect_ratio: 16:9
- sample_shift: 8
- sample_steps: 30
- negative_prompt:
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{ "frames": 81, "prompt": "GHIBLI style, anime, a girl is running through new york city eating a burger", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises";

const output = await replicate.run(
  "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
  {
    input: {
      frames: 81,
      prompt: "GHIBLI style, anime, a girl is running through new york city eating a burger",
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "https://replicate.delivery/.../R8_Wan_00001.mp4"

// To write the generated video to disk (the model outputs an .mp4, not an image):
await fs.writeFile("ghibli-vid.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    input={
        "frames": 81,
        "prompt": "GHIBLI style, anime, a girl is running through new york city eating a burger",
        "fast_mode": "Balanced",
        "resolution": "480p",
        "aspect_ratio": "16:9",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
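The prediction output (see the example under Output below) is a list holding a single .mp4 URL. Continuing from the output value returned by replicate.run above, and assuming it is that list of URL strings rather than client-specific file objects, here is a hedged sketch that saves the video locally using only the standard library:

import urllib.request

# Assumes `output` is a list of URL strings, matching the "output" field of the
# example prediction JSON below; if your client version wraps files in objects,
# take the URL from its documented accessor first.
video_url = output[0]
urllib.request.urlretrieve(video_url, "ghibli-vid.mp4")  # hypothetical local filename
print(f"saved ghibli-vid.mp4 from {video_url}")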
Run shridharathi/ghibli-vid using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "shridharathi/ghibli-vid:e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0",
    "input": {
      "frames": 81,
      "prompt": "GHIBLI style, anime, a girl is running through new york city eating a burger",
      "fast_mode": "Balanced",
      "resolution": "480p",
      "aspect_ratio": "16:9",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
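With the Prefer: wait header, the API holds the connection open until the prediction finishes or a server-side wait limit is reached; otherwise the response comes back immediately with a starting status, and you poll the prediction's get URL (shown under urls in the output below) until it reaches a terminal state. A small sketch of that polling loop in Python, using only the standard library and assuming a REPLICATE_API_TOKEN environment variable:

import json
import os
import time
import urllib.request

API_BASE = "https://api.replicate.com/v1/predictions"
TOKEN = os.environ["REPLICATE_API_TOKEN"]

def get_prediction(prediction_id: str) -> dict:
    """Fetch one prediction by id (the urls.get endpoint shown in the output below)."""
    req = urllib.request.Request(
        f"{API_BASE}/{prediction_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def wait_for(prediction_id: str, poll_seconds: float = 5.0) -> dict:
    """Poll until the prediction leaves the in-progress states."""
    while True:
        prediction = get_prediction(prediction_id)
        if prediction["status"] not in ("starting", "processing"):
            return prediction
        time.sleep(poll_seconds)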
Output
{ "completed_at": "2025-03-27T06:24:49.527176Z", "created_at": "2025-03-27T06:22:02.020000Z", "data_removed": false, "error": null, "id": "zjem6aeb4hrma0cnts1s7563y0", "input": { "frames": 81, "prompt": "GHIBLI style, anime, a girl is running through new york city eating a burger", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 2292279831\n✅ 14b_d21761924b1fb223ad93c834ec74afd9.safetensors already cached\nChecking inputs\n====================================\nChecking weights\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models\n✅ 14b_d21761924b1fb223ad93c834ec74afd9.safetensors exists in loras directory\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] Resetting TeaCache state\n[ComfyUI]\n[ComfyUI] 3%|▎ | 1/30 [00:06<03:10, 6.56s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:15<03:49, 8.20s/it]\n[ComfyUI] 10%|█ | 3/30 [00:25<03:55, 8.73s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:37<04:27, 10.29s/it]\n[ComfyUI] 20%|██ | 6/30 [00:47<02:58, 7.43s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:57<02:19, 6.32s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:07<01:55, 5.77s/it]\n[ComfyUI] 40%|████ | 12/30 [01:16<01:38, 5.45s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:26<01:24, 5.26s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:36<01:11, 5.13s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:46<01:00, 5.05s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:55<00:49, 4.99s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:05<00:39, 4.95s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:15<00:29, 4.92s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:25<00:19, 4.91s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:34<00:09, 4.89s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 4.89s/it]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 5.48s/it]\n[ComfyUI] Prompt executed in 167.33 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 167.499011914, "total_time": 167.507176 }, "output": [ "https://replicate.delivery/xezq/ZpdYHH27kh7DHVl3gZgOEqBW886amWmcobcWhcPzBOWMhKHF/R8_Wan_00001.mp4" ], "started_at": "2025-03-27T06:22:02.028164Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-uumyflz5bbtl6hrdrmf3w2bnw6rvqh4rm2szrdu6s4ks6u4ci5sq", "get": "https://api.replicate.com/v1/predictions/zjem6aeb4hrma0cnts1s7563y0", "cancel": "https://api.replicate.com/v1/predictions/zjem6aeb4hrma0cnts1s7563y0/cancel" }, "version": "e65c22b73e9df842276a6321a718152d591ddad2e36eb6441606bc06c3d133d0" }
Want to make some of these yourself?
Run this model