deepfates/hunyuan-dune
A Hunyuan-Video model fine-tuned on Dune (2021). The trigger word is "DN". For best results, start your prompt with "A video in the style of DN, DN".
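As a quick illustration of that convention, here is a minimal Python sketch (the helper name and the example scene description are made up for illustration) that prepends the trigger phrase to a scene description before sending it to the model:

# Minimal sketch: prepend the "DN" trigger phrase recommended above.
# The helper name and the example description are illustrative only.
TRIGGER_PREFIX = "A video in the style of DN, DN "

def build_prompt(description: str) -> str:
    """Return a prompt that starts with the recommended trigger phrase."""
    return TRIGGER_PREFIX + description.strip()

print(build_prompt("The video clip shows a sandworm breaching the desert surface at dawn."))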
- Public
- 391 runs
- H100
- Fine-tune
Prediction
deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e
ID: vc16pgfs29rme0cmjrx9q68r8r
Status: Succeeded
Source: API
Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of DN, DN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of DN, DN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the video to disk (requires: import fs from "node:fs/promises"):
await fs.writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of DN, DN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1,
        "guidance_scale": 6
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
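The Python example above only prints the result. As a minimal sketch for saving the generated video locally, assuming the returned value is a plain URL string like the "output" field in the prediction JSON below (newer versions of the client may instead return a file-like object with its own save helpers), one could download it with the standard library:

import urllib.request

# A sketch, not the client's official API: download the video, assuming `output`
# is a URL string. The example value below is the URL from the prediction on this page.
output = "https://replicate.delivery/xezq/fIeieteVlDe8sl6gIyhfZdvOSRx1lrIjR4O6bCEKTw25x0CCF/HunyuanVideo_00001.mp4"

with urllib.request.urlopen(output) as response, open("hunyuan_dune.mp4", "wb") as f:
    f.write(response.read())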
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
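The Prefer: wait header asks the API to hold the connection open until the prediction finishes, but a multi-minute video render can outlast that window. In that case the prediction can be polled at its "urls.get" endpoint, visible in the output JSON below. A hedged Python sketch of that fallback, using only the standard library and the prediction ID from this page:

import json
import os
import time
import urllib.request

# Sketch: poll a prediction until it reaches a terminal status. The URL below is
# the "urls.get" value from the prediction JSON on this page; "succeeded",
# "failed", and "canceled" are Replicate's terminal prediction statuses.
get_url = "https://api.replicate.com/v1/predictions/vc16pgfs29rme0cmjrx9q68r8r"
headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

while True:
    request = urllib.request.Request(get_url, headers=headers)
    with urllib.request.urlopen(request) as response:
        prediction = json.load(response)
    if prediction["status"] in ("succeeded", "failed", "canceled"):
        break
    time.sleep(5)  # wait a bit before polling again

print(prediction["status"], prediction.get("output"))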
Output
{ "completed_at": "2025-01-24T03:02:31.787881Z", "created_at": "2025-01-24T02:53:54.834000Z", "data_removed": false, "error": null, "id": "vc16pgfs29rme0cmjrx9q68r8r", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 63\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 62\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.31s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:37, 2.02s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.15s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.21s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:46, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 
2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.312 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.46s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.28s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 28.68it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.55it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.99it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.91it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 85.41it/s]\n[ComfyUI] Prompt executed in 131.74 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 133.19778185, "total_time": 516.953881 }, "output": "https://replicate.delivery/xezq/fIeieteVlDe8sl6gIyhfZdvOSRx1lrIjR4O6bCEKTw25x0CCF/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T03:00:18.590099Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-hawo6itphtlsttznu7mgllp5ktda57k7dcb43hidhbb5ufz4qnga", "get": "https://api.replicate.com/v1/predictions/vc16pgfs29rme0cmjrx9q68r8r", "cancel": "https://api.replicate.com/v1/predictions/vc16pgfs29rme0cmjrx9q68r8r/cancel" }, "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e" }
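The logs above show the model adjusting the requested 640x360 resolution to 640x368 and the 66-frame request to 65 frames. A small sketch for pre-rounding inputs so those warnings never fire; the exact rules (dimensions as multiples of 16, frame counts of the form 4n + 1) are assumptions inferred from those two adjustments rather than documented constraints:

# Sketch: pre-adjust inputs to match the constraints implied by the log warnings
# above (640x360 -> 640x368, 66 frames -> 65). The rules are assumptions inferred
# from those adjustments: dimensions as multiples of 16, frame counts of 4n + 1.
def round_up_to_multiple(value: int, multiple: int) -> int:
    return ((value + multiple - 1) // multiple) * multiple

def adjust_inputs(width: int, height: int, num_frames: int) -> tuple[int, int, int]:
    width = round_up_to_multiple(width, 16)
    height = round_up_to_multiple(height, 16)
    num_frames = ((num_frames - 1) // 4) * 4 + 1  # round down to the nearest 4n + 1
    return width, height, num_frames

print(adjust_inputs(640, 360, 66))  # -> (640, 368, 65), matching the log output above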
Prediction
deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e
ID: 3pyvh0rwqdrm80cmjs9a93gft4
Status: Succeeded
Source: API
Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of DN, DN The video clip depicts a breathtaking landscape of a desert or arid region during what appears to be either sunrise or sunset. The sun is low on the horizon, casting a warm, golden light across the scene. The sky is clear, with a gradient of light hues, transitioning from a soft yellow near the horizon to a lighter, almost white color higher up. The terrain is rugged and rocky, with various formations of mountains and hills. The rocks are dark in color, contrasting sharply with the bright sunlight. The landscape is devoid of vegetation, giving it a stark and desolate appearance. The light creates a dramatic effect, with rays of sunlight
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip depicts a breathtaking landscape of a desert or arid region during what appears to be either sunrise or sunset. The sun is low on the horizon, casting a warm, golden light across the scene. The sky is clear, with a gradient of light hues, transitioning from a soft yellow near the horizon to a lighter, almost white color higher up.\nThe terrain is rugged and rocky, with various formations of mountains and hills. The rocks are dark in color, contrasting sharply with the bright sunlight. The landscape is devoid of vegetation, giving it a stark and desolate appearance. The light creates a dramatic effect, with rays of sunlight", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of DN, DN The video clip depicts a breathtaking landscape of a desert or arid region during what appears to be either sunrise or sunset. The sun is low on the horizon, casting a warm, golden light across the scene. The sky is clear, with a gradient of light hues, transitioning from a soft yellow near the horizon to a lighter, almost white color higher up.\nThe terrain is rugged and rocky, with various formations of mountains and hills. The rocks are dark in color, contrasting sharply with the bright sunlight. The landscape is devoid of vegetation, giving it a stark and desolate appearance. The light creates a dramatic effect, with rays of sunlight",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the video to disk (requires: import fs from "node:fs/promises"):
await fs.writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of DN, DN The video clip depicts a breathtaking landscape of a desert or arid region during what appears to be either sunrise or sunset. The sun is low on the horizon, casting a warm, golden light across the scene. The sky is clear, with a gradient of light hues, transitioning from a soft yellow near the horizon to a lighter, almost white color higher up.\nThe terrain is rugged and rocky, with various formations of mountains and hills. The rocks are dark in color, contrasting sharply with the bright sunlight. The landscape is devoid of vegetation, giving it a stark and desolate appearance. The light creates a dramatic effect, with rays of sunlight",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1,
        "guidance_scale": 6
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip depicts a breathtaking landscape of a desert or arid region during what appears to be either sunrise or sunset. The sun is low on the horizon, casting a warm, golden light across the scene. The sky is clear, with a gradient of light hues, transitioning from a soft yellow near the horizon to a lighter, almost white color higher up.\\nThe terrain is rugged and rocky, with various formations of mountains and hills. The rocks are dark in color, contrasting sharply with the bright sunlight. The landscape is devoid of vegetation, giving it a stark and desolate appearance. The light creates a dramatic effect, with rays of sunlight", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T03:46:46.747380Z", "created_at": "2025-01-24T03:19:11.291000Z", "data_removed": false, "error": null, "id": "3pyvh0rwqdrm80cmjs9a93gft4", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip depicts a breathtaking landscape of a desert or arid region during what appears to be either sunrise or sunset. The sun is low on the horizon, casting a warm, golden light across the scene. The sky is clear, with a gradient of light hues, transitioning from a soft yellow near the horizon to a lighter, almost white color higher up.\nThe terrain is rugged and rocky, with various formations of mountains and hills. The rocks are dark in color, contrasting sharply with the bright sunlight. The landscape is devoid of vegetation, giving it a stark and desolate appearance. The light creates a dramatic effect, with rays of sunlight", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 138\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.02s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.14s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.20s/it]\n[ComfyUI] 10%|█ | 5/50 [00:10<01:40, 2.23s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.25s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.27s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.28s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:26, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:17, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.29s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.29s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:49<01:04, 2.29s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.29s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.29s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.29s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.29s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.29s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.29s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:05<00:48, 2.29s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.29s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.29s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:12<00:41, 2.29s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:38, 2.29s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.29s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.29s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.29s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.29s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.29s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:28<00:25, 2.29s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.29s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.29s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.29s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.29s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.29s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.29s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:44<00:09, 2.29s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.29s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.29s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:51<00:02, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.28s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.45s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.27s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, 
?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.92it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.56it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.04it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.95it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 65.05it/s]\n[ComfyUI] Prompt executed in 130.67 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 138.822486455, "total_time": 1655.45638 }, "output": "https://replicate.delivery/xezq/PVZfs3erVtoelJVerIRrQVR93IfCZN3Beqs8kQfGVAhbTeLIUA/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T03:44:27.924893Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-w32kxxyu5do2lhx5kai7zsy6ewj2bmkyiyaxo4by2qmtoaxheyxa", "get": "https://api.replicate.com/v1/predictions/3pyvh0rwqdrm80cmjs9a93gft4", "cancel": "https://api.replicate.com/v1/predictions/3pyvh0rwqdrm80cmjs9a93gft4/cancel" }, "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e" }
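All of the examples on this page request num_frames of 66 at a frame_rate of 16, which (after the adjustment to 65 frames noted in the logs) comes out to roughly a four-second clip. A tiny sketch for estimating clip length when planning inputs; the 4n + 1 frame adjustment is the same assumption carried over from the log warnings above:

# Sketch: estimated clip duration in seconds from the request parameters.
num_frames = 66   # adjusted by the model to 65, per the logs above
frame_rate = 16

effective_frames = ((num_frames - 1) // 4) * 4 + 1  # assumed 4n + 1 adjustment
print(effective_frames / frame_rate)  # -> 4.0625 seconds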
Prediction
deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e
ID: qgy9cv4f8nrma0cmk5wvkw4f5m
Status: Succeeded
Source: API
Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video of DN, a wizard in a blue-gray robe, his face shadowed inside a hood, reads a book in a cyberpunk hacker's room with computers all over the place. He suddenly looks up and directly at the camera with glowing blue eyes. The lights flicker in a glitchy fashion as magic is being cast.
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video of DN, a wizard in a blue-gray robe, his face shadowed inside a hood, reads a book in a cyberpunk hacker's room with computers all over the place. He suddenly looks up and directly at the camera with glowing blue eyes. The lights flicker in a glitchy fashion as magic is being cast.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video of DN, a wizard in a blue-gray robe, his face shadowed inside a hood, reads a book in a cyberpunk hacker's room with computers all over the place. He suddenly looks up and directly at the camera with glowing blue eyes. The lights flicker in a glitchy fashion as magic is being cast.",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the video to disk (requires: import fs from "node:fs/promises"):
await fs.writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video of DN, a wizard in a blue-gray robe, his face shadowed inside a hood, reads a book in a cyberpunk hacker's room with computers all over the place. He suddenly looks up and directly at the camera with glowing blue eyes. The lights flicker in a glitchy fashion as magic is being cast.",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1,
        "guidance_scale": 6
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video of DN, a wizard in a blue-gray robe, his face shadowed inside a hood, reads a book in a cyberpunk hacker\'s room with computers all over the place. He suddenly looks up and directly at the camera with glowing blue eyes. The lights flicker in a glitchy fashion as magic is being cast.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T18:06:11.537788Z", "created_at": "2025-01-24T18:01:08.165000Z", "data_removed": false, "error": null, "id": "qgy9cv4f8nrma0cmk5wvkw4f5m", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video of DN, a wizard in a blue-gray robe, his face shadowed inside a hood, reads a book in a cyberpunk hacker's room with computers all over the place. He suddenly looks up and directly at the camera with glowing blue eyes. The lights flicker in a glitchy fashion as magic is being cast.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_1cdbaedb-28f4-4b69-acd7-f859bf03dcb0.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_1cdbaedb-28f4-4b69-acd7-f859bf03dcb0.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\n[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\nExecuting node 42, title: HunyuanVideo Enhance A Video, class type: HyVideoEnhanceAVideo\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:01, 1.64it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.58it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:01<00:00, 1.62it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.38it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.03it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 68\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 70\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_1cdbaedb-28f4-4b69-acd7-f859bf03dcb0 with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:58, 2.43s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:39, 2.07s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:42, 2.19s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:42, 2.23s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:41, 2.26s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:40, 2.28s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:38, 2.29s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:18<01:36, 2.30s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:34, 2.30s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:32, 2.30s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:25<01:29, 2.31s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.31s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:25, 2.31s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:23, 2.31s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.31s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.31s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:16, 2.31s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:41<01:13, 2.31s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.31s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:09, 2.31s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:48<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:02, 2.31s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:55<00:59, 2.31s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:57<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:04<00:50, 2.31s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.31s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:46, 2.31s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:11<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:18<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:20<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:25<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:27<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.31s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:23, 2.31s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:34<00:20, 2.31s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.31s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.31s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:41<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:43<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:48<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 
48/50 [01:50<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] Allocated memory: memory=12.760 GB\n[ComfyUI] Max allocated memory: max_memory=15.559 GB\n[ComfyUI] Max reserved memory: max_reserved=16.875 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.52s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.27s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.31s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 28.34it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.46it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.96it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.87it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 84.41it/s]\n[ComfyUI] Prompt executed in 149.61 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 154.183016633, "total_time": 303.372788 }, "output": "https://replicate.delivery/xezq/RXXVdoEVfARdZSPOff1Q8l4TKvXCdsv7fabEz3JJJbmMJihQB/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T18:03:37.354772Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-jlbd3n2553usxmquknkxdsnw5d2buqhqdob7a3pua6dopsu3oaxa", "get": "https://api.replicate.com/v1/predictions/qgy9cv4f8nrma0cmk5wvkw4f5m", "cancel": "https://api.replicate.com/v1/predictions/qgy9cv4f8nrma0cmk5wvkw4f5m/cancel" }, "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e" }
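The logs above show the Dune LoRA being loaded with strength 1.0, mirroring the lora_strength input. A hedged sketch for sweeping that value to compare how strongly the finetune styles the same scene; the chosen strengths are illustrative, and the other inputs simply mirror the example on this page:

import replicate

# Sketch: run the same prompt at several lora_strength values to compare styling.
# The strength values are illustrative; the other inputs mirror this page's example.
prompt = (
    "A video of DN, a wizard in a blue-gray robe, his face shadowed inside a hood, "
    "reads a book in a cyberpunk hacker's room with computers all over the place."
)

for strength in (0.5, 0.75, 1.0):
    output = replicate.run(
        "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e",
        input={
            "seed": 12345,
            "steps": 50,
            "width": 640,
            "height": 360,
            "prompt": prompt,
            "frame_rate": 16,
            "num_frames": 66,
            "lora_strength": strength,
            "guidance_scale": 6,
        },
    )
    print(strength, output)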
Prediction
deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e
ID: mjx483fxk1rme0cmk5wtp9w708
Status: Succeeded
Source: API
Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of DN, DN The video clip depicts a high-speed chase scene set in a desert environment. The primary focus is on a vehicle, which appears to be a modified off-road truck, speeding across the sandy terrain. The truck is kicking up a large cloud of sand and dust, indicating its high speed and the rough terrain it is traversing. The vehicle is equipped with a large, mounted weapon system on its back, suggesting it is part of a military or law enforcement operation. The background features a vast desert landscape with sand dunes stretching into the distance. The sky is clear with a few scattered clouds, and the overall lighting suggests it is daytime. The
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip depicts a high-speed chase scene set in a desert environment. The primary focus is on a vehicle, which appears to be a modified off-road truck, speeding across the sandy terrain. The truck is kicking up a large cloud of sand and dust, indicating its high speed and the rough terrain it is traversing. The vehicle is equipped with a large, mounted weapon system on its back, suggesting it is part of a military or law enforcement operation.\nThe background features a vast desert landscape with sand dunes stretching into the distance. The sky is clear with a few scattered clouds, and the overall lighting suggests it is daytime. The", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }
Install Replicate’s Node.js client library: npm install replicate
Import and set up the client: import Replicate from "replicate"; import fs from "node:fs/promises"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e", { input: { seed: 12345, steps: 50, width: 640, height: 360, prompt: "A video in the style of DN, DN The video clip depicts a high-speed chase scene set in a desert environment. The primary focus is on a vehicle, which appears to be a modified off-road truck, speeding across the sandy terrain. The truck is kicking up a large cloud of sand and dust, indicating its high speed and the rough terrain it is traversing. The vehicle is equipped with a large, mounted weapon system on its back, suggesting it is part of a military or law enforcement operation.\nThe background features a vast desert landscape with sand dunes stretching into the distance. The sky is clear with a few scattered clouds, and the overall lighting suggests it is daytime. The", frame_rate: 16, num_frames: 66, lora_strength: 1, guidance_scale: 6 } } ); // To access the file URL: console.log(output.url()); //=> "http://example.com" // To write the file to disk: fs.writeFile("my-image.png", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library: pip install replicate
Import the client: import replicate
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e", input={ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip depicts a high-speed chase scene set in a desert environment. The primary focus is on a vehicle, which appears to be a modified off-road truck, speeding across the sandy terrain. The truck is kicking up a large cloud of sand and dust, indicating its high speed and the rough terrain it is traversing. The vehicle is equipped with a large, mounted weapon system on its back, suggesting it is part of a military or law enforcement operation.\nThe background features a vast desert landscape with sand dunes stretching into the distance. The sky is clear with a few scattered clouds, and the overall lighting suggests it is daytime. The", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
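replicate.run returns the generated file; below is a minimal sketch for saving it locally, assuming the output is the plain URL string shown in the Output JSON further down (the local filename is arbitrary).

import urllib.request

# `output` comes from the replicate.run(...) call above; assumed here to be the URL string
# that also appears in the "output" field of the Output JSON below.
local_path = "hunyuan_dune_chase.mp4"  # arbitrary filename
urllib.request.urlretrieve(output, local_path)
print(f"Saved video to {local_path}")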
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip depicts a high-speed chase scene set in a desert environment. The primary focus is on a vehicle, which appears to be a modified off-road truck, speeding across the sandy terrain. The truck is kicking up a large cloud of sand and dust, indicating its high speed and the rough terrain it is traversing. The vehicle is equipped with a large, mounted weapon system on its back, suggesting it is part of a military or law enforcement operation.\\nThe background features a vast desert landscape with sand dunes stretching into the distance. The sky is clear with a few scattered clouds, and the overall lighting suggests it is daytime. The", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
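The Prefer: wait header in the cURL call asks the API to hold the response until the prediction completes (up to a server-side limit); for renders that take a couple of minutes it is safer to poll the get URL shown in the Output JSON below. A rough sketch with requests, assuming REPLICATE_API_TOKEN is set in the environment:

import os
import time
import requests

# URL taken from the "urls.get" field of the prediction below.
get_url = "https://api.replicate.com/v1/predictions/mjx483fxk1rme0cmk5wtp9w708"
headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

# Poll until the prediction leaves its in-progress states.
while True:
    prediction = requests.get(get_url, headers=headers).json()
    if prediction["status"] not in ("starting", "processing"):
        break
    time.sleep(5)

print(prediction["status"], prediction.get("output"))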
Output
{ "completed_at": "2025-01-24T18:11:09.627190Z", "created_at": "2025-01-24T18:01:36.408000Z", "data_removed": false, "error": null, "id": "mjx483fxk1rme0cmk5wtp9w708", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip depicts a high-speed chase scene set in a desert environment. The primary focus is on a vehicle, which appears to be a modified off-road truck, speeding across the sandy terrain. The truck is kicking up a large cloud of sand and dust, indicating its high speed and the rough terrain it is traversing. The vehicle is equipped with a large, mounted weapon system on its back, suggesting it is part of a military or law enforcement operation.\nThe background features a vast desert landscape with sand dunes stretching into the distance. The sky is clear with a few scattered clouds, and the overall lighting suggests it is daytime. The", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_2b90db2a-8b09-47a6-b995-94705b568d47.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_2b90db2a-8b09-47a6-b995-94705b568d47.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 138\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_2b90db2a-8b09-47a6-b995-94705b568d47 with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:37, 2.02s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:41, 2.15s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.22s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:41, 2.25s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.27s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:38, 2.28s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:36, 2.29s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:34, 2.30s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:32, 2.30s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.30s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.30s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:25, 2.30s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.30s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.30s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:16, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:41<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:09, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:02, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:57<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:04<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:46, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:11<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:20<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:27<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:23, 2.31s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:34<00:20, 2.31s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.31s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.31s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.31s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:43<00:11, 2.31s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.31s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.31s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:50<00:04, 2.31s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.31s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.344 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.48s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 
1.25s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.29s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.62it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.51it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.99it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.90it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 65.24it/s]\n[ComfyUI] Prompt executed in 142.71 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 150.055509117, "total_time": 573.21919 }, "output": "https://replicate.delivery/xezq/OhMSpyhEWWJ1HBLiZiYO3IebNX9EjAriORzHHfi3iOs9mYIUA/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T18:08:39.571680Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-d3htcddzwgxp7z6qsugj2l2nguwy653nt3l2zzjjteqzhdl3gbga", "get": "https://api.replicate.com/v1/predictions/mjx483fxk1rme0cmk5wtp9w708", "cancel": "https://api.replicate.com/v1/predictions/mjx483fxk1rme0cmk5wtp9w708/cancel" }, "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e" }
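The logs in this response show the inputs being adjusted before sampling: 640x360 becomes 640x368 and 66 frames becomes 65. A hedged sketch of the equivalent client-side rounding, assuming (inferred from those numbers, not from official documentation) that width and height must be multiples of 16 and the frame count must have the form 4k + 1:

def round_up_to_multiple(value: int, multiple: int = 16) -> int:
    # e.g. 360 -> 368; 640 stays 640
    return ((value + multiple - 1) // multiple) * multiple

def snap_frame_count(num_frames: int) -> int:
    # e.g. 66 -> 65 (largest value of the form 4k + 1 not above the request)
    return ((num_frames - 1) // 4) * 4 + 1

print(round_up_to_multiple(360))  # 368
print(snap_frame_count(66))       # 65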
Prediction
deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e
ID: vvt67ym631rma0cmk5xb220pt4 · Status: Succeeded · Source: API · Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of DN, DN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }
Install Replicate’s Node.js client library: npm install replicate
Import and set up the client: import Replicate from "replicate"; import fs from "node:fs/promises"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e", { input: { seed: 12345, steps: 50, width: 640, height: 360, prompt: "A video in the style of DN, DN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.", frame_rate: 16, num_frames: 66, lora_strength: 1, guidance_scale: 6 } } ); // To access the file URL: console.log(output.url()); //=> "http://example.com" // To write the file to disk: fs.writeFile("my-image.png", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library: pip install replicate
Import the client: import replicate
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e", input={ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip features a close-up of a person\'s face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person\'s facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person\'s gaze conveying a sense of determination or resolve.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T18:13:34.810830Z", "created_at": "2025-01-24T18:02:11.352000Z", "data_removed": false, "error": null, "id": "vvt67ym631rma0cmk5xb220pt4", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_6038c967-330e-441e-aa0c-3f9ddf687fe4.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_6038c967-330e-441e-aa0c-3f9ddf687fe4.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 135\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_6038c967-330e-441e-aa0c-3f9ddf687fe4 with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.31s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.02s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.14s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.20s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:12<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.344 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.48s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 
1.25s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.28s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.92it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.54it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.01it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.93it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.93it/s]\n[ComfyUI] Prompt executed in 138.20 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 144.490286142, "total_time": 683.45883 }, "output": "https://replicate.delivery/xezq/eoP0mnqSRR2tG6XXpdUcWOLanaXmsrDY8ZkDC4uxUTEnUMEKA/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T18:11:10.320544Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-hyl4nzasyntdrauxm6fvxkb7nbqqaa6h76y3i6yzkayc5uvowtfa", "get": "https://api.replicate.com/v1/predictions/vvt67ym631rma0cmk5xb220pt4", "cancel": "https://api.replicate.com/v1/predictions/vvt67ym631rma0cmk5xb220pt4/cancel" }, "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e" }
Prediction
deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e
ID: 41a1rcfbbhrme0cmn40rrsxtpm · Status: Succeeded · Source: API · Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of DN, DN The video clip depicts a man walking on a rooftop at night
- frame_rate: 16
- num_frames: 66
- lora_strength: 1.3
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip depicts a man walking on a rooftop at night", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.3, "guidance_scale": 6 }
Install Replicate’s Node.js client library: npm install replicate
Import and set up the client: import Replicate from "replicate"; import fs from "node:fs/promises"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e", { input: { seed: 12345, steps: 50, width: 640, height: 360, prompt: "A video in the style of DN, DN The video clip depicts a man walking on a rooftop at night", frame_rate: 16, num_frames: 66, lora_strength: 1.3, guidance_scale: 6 } } ); // To access the file URL: console.log(output.url()); //=> "http://example.com" // To write the file to disk: fs.writeFile("my-image.png", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library: pip install replicate
Import the client: import replicate
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e", input={ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip depicts a man walking on a rooftop at night", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.3, "guidance_scale": 6 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip depicts a man walking on a rooftop at night", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.3, "guidance_scale": 6 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-27T18:26:51.409524Z", "created_at": "2025-01-27T18:24:22.876000Z", "data_removed": false, "error": null, "id": "41a1rcfbbhrme0cmn40rrsxtpm", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip depicts a man walking on a rooftop at night", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.3, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_c4b351a6-c332-411b-8cbe-1239ec5385de.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_c4b351a6-c332-411b-8cbe-1239ec5385de.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 22\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 23\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_c4b351a6-c332-411b-8cbe-1239ec5385de with strength: 1.3\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.02s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.15s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.21s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.27s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.28s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.28s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:26, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:17, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:10, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.29s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.29s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:49<01:04, 2.29s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.29s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.29s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.29s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:54, 2.29s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.29s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.29s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:05<00:48, 2.29s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.29s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.29s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:12<00:41, 2.29s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:38, 2.29s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.29s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.29s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.29s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.29s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.29s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:28<00:25, 2.29s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.29s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.29s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.29s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.29s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.29s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.29s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:44<00:09, 2.29s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.29s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.29s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:51<00:02, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.28s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.344 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.46s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 
1.23s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.27s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.89it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.54it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.03it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.95it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 65.20it/s]\n[ComfyUI] Prompt executed in 142.40 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 147.690874543, "total_time": 148.533524 }, "output": "https://replicate.delivery/xezq/9BIxJ5t2ZfXVJSwxaSeWsIIbBWxbuOMJPw7svXIGgTVrHYJUA/HunyuanVideo_00001.mp4", "started_at": "2025-01-27T18:24:23.718649Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-fdjnqmts3746dbeytl44pykqyxp47nd4gpwbu5st75lypqce7nxq", "get": "https://api.replicate.com/v1/predictions/41a1rcfbbhrme0cmn40rrsxtpm", "cancel": "https://api.replicate.com/v1/predictions/41a1rcfbbhrme0cmn40rrsxtpm/cancel" }, "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e" }
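The metrics in this response show predict_time ≈ 147.7 s against total_time ≈ 148.5 s, so this run landed on a warm instance, whereas the earlier predictions spent several minutes queued before starting. A small sketch for pulling that overhead out of a prediction response (prediction is assumed to be the parsed Output JSON above):

# `prediction` is assumed to be the parsed Output JSON shown above.
metrics = prediction["metrics"]
overhead = metrics["total_time"] - metrics["predict_time"]
print(f"predict: {metrics['predict_time']:.1f} s, queue/setup overhead: {overhead:.1f} s")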
Generated in
Seed set to: 12345
⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements
⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements
USING REPLICATE WEIGHTS (preferred method)
🎯 USING REPLICATE WEIGHTS TAR FILE 🎯
----------------------------------------
📦 Processing replicate weights tar file...
🔄 Will rename LoRA to: replicate_c4b351a6-c332-411b-8cbe-1239ec5385de.safetensors
📂 Extracting tar contents...
✅ Found lora_comfyui.safetensors in tar
✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_c4b351a6-c332-411b-8cbe-1239ec5385de.safetensors
----------------------------------------
Checking inputs
====================================
Checking weights
✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae
✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models
====================================
Running workflow
[ComfyUI] got prompt
Executing node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode
[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 22
[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 23
Executing node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect
Executing node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader
[ComfyUI] model_type FLOW
[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
[ComfyUI] Using accelerate to load and assign model weights to device...
[ComfyUI] Loading LoRA: replicate_c4b351a6-c332-411b-8cbe-1239ec5385de with strength: 1.3
[ComfyUI] Requested to load HyVideoModel
[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True
[ComfyUI] Input (height, width, video_length) = (368, 640, 65)
Executing node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler
[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps
[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])
[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.28s/it]
[ComfyUI] Allocated memory: memory=12.300 GB
[ComfyUI] Max allocated memory: max_memory=15.099 GB
[ComfyUI] Max reserved memory: max_reserved=16.344 GB
Executing node 5, title: HunyuanVideo Decode, class type: HyVideoDecode
[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.27s/it]
[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.89it/s]
[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.95it/s]
Executing node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine
[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 65.20it/s]
[ComfyUI] Prompt executed in 142.40 seconds
outputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}
====================================
HunyuanVideo_00001.png
HunyuanVideo_00001.mp4
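The two ⚠️ adjustments at the top of the log come from the model's sampling constraints: spatial dimensions get snapped to a size the VAE and transformer can tile evenly, and the frame count must map to a whole number of temporal latents (65 frames → 17 latents above). The sketch below is a hypothetical reconstruction of that preprocessing, assuming a multiple-of-16 rule for width/height and a 4k + 1 rule for frames; it is not the predictor's actual code, but it reproduces the numbers in the log.

# Hypothetical sketch of the input adjustment hinted at in the logs above.
# Assumes HunyuanVideo-style constraints: width/height divisible by 16,
# num_frames of the form 4k + 1 so it packs into whole temporal latents.

def adjust_inputs(width: int, height: int, num_frames: int):
    # Round spatial dims up to the next multiple of 16 (360 -> 368).
    width = ((width + 15) // 16) * 16
    height = ((height + 15) // 16) * 16
    # Snap the frame count down to the nearest 4k + 1 value (66 -> 65).
    num_frames = ((num_frames - 1) // 4) * 4 + 1
    latents = (num_frames - 1) // 4 + 1  # 65 frames -> 17 latents
    return width, height, num_frames, latents

print(adjust_inputs(640, 360, 66))  # (640, 368, 65, 17)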
Prediction
deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e
ID: k26qg1mms9rma0cmn4aagq61tw
Status: Succeeded
Source: API
Hardware: H100
Input
- seed
- 12345
- steps
- 50
- width
- 640
- height
- 360
- prompt
- A video in the style of DN, DN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward
- frame_rate
- 16
- num_frames
- 66
- lora_strength
- 1.2
- guidance_scale
- 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises"; // needed below to write the output file

const output = await replicate.run(
  "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of DN, DN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1.2,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the file to disk (this model outputs an MP4 video):
await fs.writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of DN, DN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1.2,
        "guidance_scale": 6
    }
)
print(output)
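Depending on the client version, replicate.run() may return a FileOutput object rather than a bare URL string. A minimal sketch for saving the generated video to disk, assuming that newer-client behavior (the output filename is arbitrary):

import replicate

# Same input as the example above (repeated so the sketch is self-contained).
output = replicate.run(
    "deepfates/hunyuan-dune:4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of DN, DN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1.2,
        "guidance_scale": 6,
    },
)

# Newer client versions return a FileOutput; older ones return a URL string.
if hasattr(output, "read"):
    with open("output.mp4", "wb") as f:  # arbitrary local filename
        f.write(output.read())
else:
    print("Download the video from:", output)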
To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-dune using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e",
    "input": {
      "seed": 12345,
      "steps": 50,
      "width": 640,
      "height": 360,
      "prompt": "A video in the style of DN, DN The video clip depicts A detective\'s weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward",
      "frame_rate": 16,
      "num_frames": 66,
      "lora_strength": 1.2,
      "guidance_scale": 6
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-27T18:55:38.359233Z", "created_at": "2025-01-27T18:44:45.898000Z", "data_removed": false, "error": null, "id": "k26qg1mms9rma0cmn4aagq61tw", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of DN, DN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_877afdae-35b2-46d7-acf3-18c4089252a3.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_877afdae-35b2-46d7-acf3-18c4089252a3.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 38\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 38\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_877afdae-35b2-46d7-acf3-18c4089252a3 with strength: 1.2\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.02s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.14s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.20s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.30s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.30s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:02, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.47s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 
1.25s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.28s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.69it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.53it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.01it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.93it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.51it/s]\n[ComfyUI] Prompt executed in 142.47 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 150.930592604, "total_time": 652.461233 }, "output": "https://replicate.delivery/xezq/l1kM58vEa266GpLz8nvBv7vP1pwkjdahf9TbA7KnwjJVRsEKA/HunyuanVideo_00001.mp4", "started_at": "2025-01-27T18:53:07.428641Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-vbqblyo3d4e3ig7gyasus3gat6vghaoltomiwl36mbbnuf3qgslq", "get": "https://api.replicate.com/v1/predictions/k26qg1mms9rma0cmn4aagq61tw", "cancel": "https://api.replicate.com/v1/predictions/k26qg1mms9rma0cmn4aagq61tw/cancel" }, "version": "4fbe2f9a8c5f5912fa4bba528d5b2e27494557ab922356e3f6374e3353e5c36e" }
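The urls.get field in the response above can be used to re-fetch this prediction (status, logs, metrics, output URL) at any time. A small sketch with the Python client, assuming the standard predictions.get accessor and the prediction ID from the response:

import replicate

# Fetch the prediction shown above by the "id" field of the response.
prediction = replicate.predictions.get("k26qg1mms9rma0cmn4aagq61tw")

print(prediction.status)                   # "succeeded"
print(prediction.metrics["predict_time"])  # ~150.9 seconds
print(prediction.output)                   # URL of HunyuanVideo_00001.mp4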
Want to make some of these yourself?
Run this model