deepfates/hunyuan-inception
A Hunyuan-Video model fine-tuned on Inception (2010). The trigger word is "NCPTN"; for best results, start your prompt with "A video in the style of NCPTN, NCPTN".
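For a quick test of the trigger phrase, a minimal call can look like the sketch below. It uses the same replicate.run pattern as the Python examples further down this page; the scene description after the trigger phrase is just an illustrative placeholder, and all other inputs fall back to the model's defaults.

import replicate

# The "A video in the style of NCPTN, NCPTN" prefix is the documented trigger phrase;
# the rest of the prompt is a hypothetical placeholder scene description.
prompt = (
    "A video in the style of NCPTN, NCPTN "
    "The video clip shows a spinning top wobbling on a polished wooden table in soft, warm light."
)

output = replicate.run(
    "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    input={"prompt": prompt, "width": 640, "height": 360, "num_frames": 66, "frame_rate": 16},
)
print(output.url())  # URL of the generated .mp4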
Prediction
deepfates/hunyuan-inception:a471cf82
ID: qkc070y999rm80cmjpsbbtm42g · Status: Succeeded · Source: API · Hardware: H100
Input
- crf: 19
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NCPTN, NCPTN The video clip depicts a detailed portrait of a woman's face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire. The woman's expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on
- lora_url: ""
- scheduler: DPMSolverMultistepScheduler
- flow_shift: 9
- frame_rate: 16
- num_frames: 66
- enhance_end: 1
- enhance_start: 0
- force_offload: true
- lora_strength: 1
- enhance_double: true
- enhance_single: true
- enhance_weight: 0.3
- guidance_scale: 6
- denoise_strength: 1
{ "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts a detailed portrait of a woman's face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire.\nThe woman's expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 }Install Replicate’s Node.js client library:npm install replicateImport and set up the client:import Replicate from "replicate"; import fs from "node:fs"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
  {
    input: {
      crf: 19,
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of NCPTN, NCPTN The video clip depicts a detailed portrait of a woman's face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire.\nThe woman's expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on",
      lora_url: "",
      scheduler: "DPMSolverMultistepScheduler",
      flow_shift: 9,
      frame_rate: 16,
      num_frames: 66,
      enhance_end: 1,
      enhance_start: 0,
      force_offload: true,
      lora_strength: 1,
      enhance_double: true,
      enhance_single: true,
      enhance_weight: 0.3,
      guidance_scale: 6,
      denoise_strength: 1
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the file to disk (this model outputs an .mp4 video):
fs.writeFile("output.mp4", output);

To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate

Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    input={
        "crf": 19,
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts a detailed portrait of a woman's face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire.\nThe woman's expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on",
        "lora_url": "",
        "scheduler": "DPMSolverMultistepScheduler",
        "flow_shift": 9,
        "frame_rate": 16,
        "num_frames": 66,
        "enhance_end": 1,
        "enhance_start": 0,
        "force_offload": True,
        "lora_strength": 1,
        "enhance_double": True,
        "enhance_single": True,
        "enhance_weight": 0.3,
        "guidance_scale": 6,
        "denoise_strength": 1
    }
)

# To access the file URL:
print(output.url())  #=> "http://example.com"

# To write the file to disk (this model outputs an .mp4 video):
with open("output.mp4", "wb") as file:
    file.write(output.read())

To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e", "input": { "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts a detailed portrait of a woman\'s face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire.\\nThe woman\'s expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 } }' \
  https://api.replicate.com/v1/predictions

To learn more, take a look at Replicate’s HTTP API reference docs.
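Each run takes roughly two minutes on an H100 (see the logs in the Output section below), so a blocking request with the Prefer: wait header may return before the video is ready. One non-blocking alternative, sketched here with the Python client and the version hash used throughout this page (the prompt is a placeholder), is to create the prediction and poll it:

import time
import replicate

# Hedged sketch: create the prediction without waiting, then poll until it finishes.
prediction = replicate.predictions.create(
    version="a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    input={
        "prompt": "A video in the style of NCPTN, NCPTN A placeholder scene description",
        "width": 640,
        "height": 360,
        "num_frames": 66,
    },
)

while prediction.status not in ("succeeded", "failed", "canceled"):
    time.sleep(5)
    prediction = replicate.predictions.get(prediction.id)  # refresh status from the API

if prediction.status == "succeeded":
    print(prediction.output)  # URL of the generated .mp4, as shown in the Output section below

This mirrors the urls.get endpoint that appears in the prediction JSON below.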
Output
{ "completed_at": "2025-01-24T00:28:18.334888Z", "created_at": "2025-01-24T00:25:09.706000Z", "data_removed": false, "error": null, "id": "qkc070y999rm80cmjpsbbtm42g", "input": { "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts a detailed portrait of a woman's face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire.\nThe woman's expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 146\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.02s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.15s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.21s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.28s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.29s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.30s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.30s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:17, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:10, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.29s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.29s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.29s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.29s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.29s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.29s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:54, 2.29s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.29s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.29s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.29s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.29s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.29s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:12<00:41, 2.29s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:38, 2.29s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.29s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.29s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.29s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.29s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.29s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:28<00:25, 2.29s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.29s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.29s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.29s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.29s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.29s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.29s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:44<00:09, 2.29s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.29s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.29s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:51<00:02, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.28s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.312 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.45s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.23s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.26s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 
26.01it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.56it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.04it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.96it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.34it/s]\n[ComfyUI] Prompt executed in 133.10 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 135.042410369, "total_time": 188.628888 }, "output": "https://replicate.delivery/xezq/szhjQIjDsnrNGdb33anp8TdZ5slglkZa5C2bP5nneLaRhEEKA/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T00:26:03.292478Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-udcrbdzb3qt5swdmgsglx34sbrt3lovqwsacxaex47zg4o4guzia", "get": "https://api.replicate.com/v1/predictions/qkc070y999rm80cmjpsbbtm42g", "cancel": "https://api.replicate.com/v1/predictions/qkc070y999rm80cmjpsbbtm42g/cancel" }, "version": "a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e" }
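The two warnings at the top of the logs show the model adjusting the requested 640x360, 66-frame input to 640x368 and 65 frames. Reading those warnings (and the "65 frames in 17 latents" line) as width/height rounded up to a multiple of 16 and frame counts snapped to the form 4k + 1 is an inference, not a documented rule, but a small helper along those lines can predict the adjustment before you submit a request:

# Hedged helper, inferred from the adjustment warnings in the logs above:
# dimensions appear to be rounded up to multiples of 16, and the frame count
# appears to be snapped to the form 4*k + 1 (65 frames -> 17 latents).
def adjust_inputs(width: int, height: int, num_frames: int) -> tuple[int, int, int]:
    def round_up_16(x: int) -> int:
        return ((x + 15) // 16) * 16

    frames = ((num_frames - 1) // 4) * 4 + 1  # largest 4k+1 not exceeding the request
    return round_up_16(width), round_up_16(height), frames

print(adjust_inputs(640, 360, 66))  # -> (640, 368, 65), matching the warnings above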
Prediction
deepfates/hunyuan-inception:a471cf82
ID: ahdxf9v5wnrma0cmjpfv73d8vw · Status: Succeeded · Source: API · Hardware: H100
Input
- crf: 19
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NCPTN, NCPTN The video clip depicts a couple sitting on a bench in an outdoor environment. The man, dressed in a dark blue shirt and black pants, is cradling the woman's face in his hands, engaging in an intimate moment. The woman, wearing a white dress, is clutching what appears to be an envelope in her right hand, possibly containing a letter or a document. The bench they are sitting on is white, and there is a large tree with dark green leaves in the background, providing a natural and serene setting for their interaction. The overall scene conveys a sense of affection, love, and possibly contemplation.
- lora_url: ""
- scheduler: DPMSolverMultistepScheduler
- flow_shift: 9
- frame_rate: 16
- num_frames: 66
- enhance_end: 1
- enhance_start: 0
- force_offload: true
- lora_strength: 1
- enhance_double: true
- enhance_single: true
- enhance_weight: 0.3
- guidance_scale: 6
- denoise_strength: 1
{ "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts a couple sitting on a bench in an outdoor environment. The man, dressed in a dark blue shirt and black pants, is cradling the woman's face in his hands, engaging in an intimate moment. The woman, wearing a white dress, is clutching what appears to be an envelope in her right hand, possibly containing a letter or a document. The bench they are sitting on is white, and there is a large tree with dark green leaves in the background, providing a natural and serene setting for their interaction. The overall scene conveys a sense of affection, love, and possibly contemplation.", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 }Install Replicate’s Node.js client library:npm install replicateImport and set up the client:import Replicate from "replicate"; import fs from "node:fs"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
  {
    input: {
      crf: 19,
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of NCPTN, NCPTN The video clip depicts a couple sitting on a bench in an outdoor environment. The man, dressed in a dark blue shirt and black pants, is cradling the woman's face in his hands, engaging in an intimate moment. The woman, wearing a white dress, is clutching what appears to be an envelope in her right hand, possibly containing a letter or a document. The bench they are sitting on is white, and there is a large tree with dark green leaves in the background, providing a natural and serene setting for their interaction. The overall scene conveys a sense of affection, love, and possibly contemplation.",
      lora_url: "",
      scheduler: "DPMSolverMultistepScheduler",
      flow_shift: 9,
      frame_rate: 16,
      num_frames: 66,
      enhance_end: 1,
      enhance_start: 0,
      force_offload: true,
      lora_strength: 1,
      enhance_double: true,
      enhance_single: true,
      enhance_weight: 0.3,
      guidance_scale: 6,
      denoise_strength: 1
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the file to disk (this model outputs an .mp4 video):
fs.writeFile("output.mp4", output);

To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate

Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    input={
        "crf": 19,
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts a couple sitting on a bench in an outdoor environment. The man, dressed in a dark blue shirt and black pants, is cradling the woman's face in his hands, engaging in an intimate moment. The woman, wearing a white dress, is clutching what appears to be an envelope in her right hand, possibly containing a letter or a document. The bench they are sitting on is white, and there is a large tree with dark green leaves in the background, providing a natural and serene setting for their interaction. The overall scene conveys a sense of affection, love, and possibly contemplation.",
        "lora_url": "",
        "scheduler": "DPMSolverMultistepScheduler",
        "flow_shift": 9,
        "frame_rate": 16,
        "num_frames": 66,
        "enhance_end": 1,
        "enhance_start": 0,
        "force_offload": True,
        "lora_strength": 1,
        "enhance_double": True,
        "enhance_single": True,
        "enhance_weight": 0.3,
        "guidance_scale": 6,
        "denoise_strength": 1
    }
)

# To access the file URL:
print(output.url())  #=> "http://example.com"

# To write the file to disk (this model outputs an .mp4 video):
with open("output.mp4", "wb") as file:
    file.write(output.read())

To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e", "input": { "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts a couple sitting on a bench in an outdoor environment. The man, dressed in a dark blue shirt and black pants, is cradling the woman\'s face in his hands, engaging in an intimate moment. The woman, wearing a white dress, is clutching what appears to be an envelope in her right hand, possibly containing a letter or a document. The bench they are sitting on is white, and there is a large tree with dark green leaves in the background, providing a natural and serene setting for their interaction. The overall scene conveys a sense of affection, love, and possibly contemplation.", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 } }' \
  https://api.replicate.com/v1/predictions

To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T00:14:22.967504Z", "created_at": "2025-01-24T00:03:59.077000Z", "data_removed": false, "error": null, "id": "ahdxf9v5wnrma0cmjpfv73d8vw", "input": { "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts a couple sitting on a bench in an outdoor environment. The man, dressed in a dark blue shirt and black pants, is cradling the woman's face in his hands, engaging in an intimate moment. The woman, wearing a white dress, is clutching what appears to be an envelope in her right hand, possibly containing a letter or a document. The bench they are sitting on is white, and there is a large tree with dark green leaves in the background, providing a natural and serene setting for their interaction. The overall scene conveys a sense of affection, love, and possibly contemplation.", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 140\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.02s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.15s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.21s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.29s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:17, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:10, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.29s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.29s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.29s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.29s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.29s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.29s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:54, 2.29s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.29s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.29s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.29s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.29s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.29s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:12<00:41, 2.29s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:38, 2.29s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.29s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.29s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.29s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.29s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.29s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:28<00:25, 2.29s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.29s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.29s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.29s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.29s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.29s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.29s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:44<00:09, 2.29s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.29s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.29s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:51<00:02, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.28s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.344 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.46s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.27s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 
25.94it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.56it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.04it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.95it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 65.20it/s]\n[ComfyUI] Prompt executed in 133.25 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 140.641231705, "total_time": 623.890504 }, "output": "https://replicate.delivery/xezq/XnfBPNlddiT2AyjEKsEfv67E9623f7uKCXChRQS9dQK8qRQoA/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T00:12:02.326273Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-5agoxdwy6mm4wzaraqke3dqcwpk7hzh5h3y44zfjigl2l57auz2q", "get": "https://api.replicate.com/v1/predictions/ahdxf9v5wnrma0cmjpfv73d8vw", "cancel": "https://api.replicate.com/v1/predictions/ahdxf9v5wnrma0cmjpfv73d8vw/cancel" }, "version": "a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e" }
Prediction
deepfates/hunyuan-inception:a471cf82
ID: cb6m558cf1rm80cmjrxtrj4s6g · Status: Succeeded · Source: API · Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NCPTN, NCPTN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }Install Replicate’s Node.js client library:npm install replicateImport and set up the client:import Replicate from "replicate"; import fs from "node:fs"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of NCPTN, NCPTN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the file to disk (this model outputs an .mp4 video):
fs.writeFile("output.mp4", output);

To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate

Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of NCPTN, NCPTN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1,
        "guidance_scale": 6
    }
)

# To access the file URL:
print(output.url())  #=> "http://example.com"

# To write the file to disk (this model outputs an .mp4 video):
with open("output.mp4", "wb") as file:
    file.write(output.read())

To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } }' \
  https://api.replicate.com/v1/predictions

To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T03:02:54.484908Z", "created_at": "2025-01-24T02:53:59.800000Z", "data_removed": false, "error": null, "id": "cb6m558cf1rm80cmjrxtrj4s6g", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 67\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 66\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:53, 2.31s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:37, 2.02s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:41, 2.15s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.21s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:41, 2.25s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.27s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.28s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:36, 2.29s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.29s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.30s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.30s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.30s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:25, 2.30s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.30s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.30s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:09, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:02, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:57<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:04<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:46, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 
2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:20<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:27<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:23, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:43<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:50<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.344 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.48s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.26s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.29s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 28.25it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.52it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.96it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.88it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 85.48it/s]\n[ComfyUI] Prompt executed in 134.69 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 136.16571961, "total_time": 534.684908 }, "output": "https://replicate.delivery/xezq/SAgyDJYuW0KlJ5ZTswDtNZkF3hOSak933JGFvLBQVJq30CCF/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T03:00:38.319189Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-arj7zjgisgmteba2bypr7fabl56qlji7ulazkyqzc6tkx2urqhfa", "get": "https://api.replicate.com/v1/predictions/cb6m558cf1rm80cmjrxtrj4s6g", "cancel": "https://api.replicate.com/v1/predictions/cb6m558cf1rm80cmjrxtrj4s6g/cancel" }, "version": "a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e" }
Prediction
deepfates/hunyuan-inception:a471cf82
ID: a47ebx5sa1rmc0cmjs3vd3fsar · Status: Succeeded · Source: API · Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NCPTN, NCPTN The video clip features a man wearing a white shirt and dark pants standing in the middle of a two-lane road. The road is bordered by trees on one side and a grassy area with small bushes on the other. The man is holding a gun in his right hand, and his left hand is resting on his hip. He appears to be in a state of contemplation or waiting, as he is standing with a somewhat stoic expression. The setting suggests a calm and serene environment, with the trees providing a natural backdrop and the road leading off into the distance. The overall mood of the scene is
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip features a man wearing a white shirt and dark pants standing in the middle of a two-lane road. The road is bordered by trees on one side and a grassy area with small bushes on the other. The man is holding a gun in his right hand, and his left hand is resting on his hip. He appears to be in a state of contemplation or waiting, as he is standing with a somewhat stoic expression. The setting suggests a calm and serene environment, with the trees providing a natural backdrop and the road leading off into the distance. The overall mood of the scene is", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }Install Replicate’s Node.js client library:npm install replicateImport and set up the client:import Replicate from "replicate"; import fs from "node:fs"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e", { input: { seed: 12345, steps: 50, width: 640, height: 360, prompt: "A video in the style of NCPTN, NCPTN The video clip features a man wearing a white shirt and dark pants standing in the middle of a two-lane road. The road is bordered by trees on one side and a grassy area with small bushes on the other. The man is holding a gun in his right hand, and his left hand is resting on his hip. He appears to be in a state of contemplation or waiting, as he is standing with a somewhat stoic expression. The setting suggests a calm and serene environment, with the trees providing a natural backdrop and the road leading off into the distance. The overall mood of the scene is", frame_rate: 16, num_frames: 66, lora_strength: 1, guidance_scale: 6 } } ); // To access the file URL: console.log(output.url()); //=> "http://example.com" // To write the file to disk: fs.writeFile("my-image.png", output);To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:pip install replicateImport the client:import replicateRun deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e", input={ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip features a man wearing a white shirt and dark pants standing in the middle of a two-lane road. The road is bordered by trees on one side and a grassy area with small bushes on the other. The man is holding a gun in his right hand, and his left hand is resting on his hip. He appears to be in a state of contemplation or waiting, as he is standing with a somewhat stoic expression. The setting suggests a calm and serene environment, with the trees providing a natural backdrop and the road leading off into the distance. The overall mood of the scene is", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } ) # To access the file URL: print(output.url()) #=> "http://example.com" # To write the file to disk: with open("my-image.png", "wb") as file: file.write(output.read())To learn more, take a look at the guide on getting started with Python.
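Note that every example prompt on this page begins with the same "A video in the style of NCPTN, NCPTN" prefix before the scene description. If you build prompts programmatically, a tiny helper keeps that prefix consistent; the sketch below is only an illustration (the build_prompt name and structure are hypothetical, not part of the model's API):

# Hypothetical helper: prepends the style prefix used by the example prompts on this page.
STYLE_PREFIX = "A video in the style of NCPTN, NCPTN "

def build_prompt(scene_description: str) -> str:
    """Return a prompt shaped like the examples on this page."""
    return STYLE_PREFIX + scene_description.strip()

prompt = build_prompt(
    "The video clip features a man wearing a white shirt and dark pants "
    "standing in the middle of a two-lane road."
)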
Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    "input": {
      "seed": 12345,
      "steps": 50,
      "width": 640,
      "height": 360,
      "prompt": "A video in the style of NCPTN, NCPTN The video clip features a man wearing a white shirt and dark pants standing in the middle of a two-lane road. The road is bordered by trees on one side and a grassy area with small bushes on the other. The man is holding a gun in his right hand, and his left hand is resting on his hip. He appears to be in a state of contemplation or waiting, as he is standing with a somewhat stoic expression. The setting suggests a calm and serene environment, with the trees providing a natural backdrop and the road leading off into the distance. The overall mood of the scene is",
      "frame_rate": 16,
      "num_frames": 66,
      "lora_strength": 1,
      "guidance_scale": 6
    }
  }' \
  https://api.replicate.com/v1/predictions

To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T03:58:35.034468Z", "created_at": "2025-01-24T03:07:50.480000Z", "data_removed": false, "error": null, "id": "a47ebx5sa1rmc0cmjs3vd3fsar", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip features a man wearing a white shirt and dark pants standing in the middle of a two-lane road. The road is bordered by trees on one side and a grassy area with small bushes on the other. The man is holding a gun in his right hand, and his left hand is resting on his hip. He appears to be in a state of contemplation or waiting, as he is standing with a somewhat stoic expression. The setting suggests a calm and serene environment, with the trees providing a natural backdrop and the road leading off into the distance. The overall mood of the scene is", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 136\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:53, 2.31s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:37, 2.03s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:41, 2.15s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.22s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:41, 2.26s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:40, 2.27s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:38, 2.28s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:36, 2.29s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:34, 2.29s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.30s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.30s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.30s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:25, 2.30s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.30s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.30s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:41<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:09, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:02, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:57<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:04<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:46, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:20<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:27<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:23, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:43<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:50<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.344 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.46s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.28s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, 
?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.94it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.54it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.02it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.94it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 65.86it/s]\n[ComfyUI] Prompt executed in 130.89 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 132.356643091, "total_time": 3044.554468 }, "output": "https://replicate.delivery/xezq/zQUFfHfPEbgvhEPV4mem4nJISELlOJUH59AkxPfXdgxuegBhC/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T03:56:22.677825Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-eqau7nh4bosha2z4i5kted4yru3j2bsdf4rfuop6cqcd2qeygtna", "get": "https://api.replicate.com/v1/predictions/a47ebx5sa1rmc0cmjs3vd3fsar", "cancel": "https://api.replicate.com/v1/predictions/a47ebx5sa1rmc0cmjs3vd3fsar/cancel" }, "version": "a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e" }
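The logs in the output above report that the requested 640x360 frame size was adjusted to 640x368 and the 66-frame request to 65 frames. If you want to avoid those warnings, you can snap the values yourself before submitting; the sketch below assumes dimensions must be multiples of 16 and frame counts must be of the form 4k + 1, a rule inferred from those log messages rather than taken from published constraints, so treat it as a guess and check the model schema.

# Rough pre-flight rounding, inferred from the "Adjusted dimensions/frame count"
# warnings in the logs above (an assumption, not the model's documented behavior).
def snap_to_model_grid(width: int, height: int, num_frames: int) -> tuple[int, int, int]:
    snapped_width = ((width + 15) // 16) * 16         # round up to a multiple of 16
    snapped_height = ((height + 15) // 16) * 16       # round up to a multiple of 16
    snapped_frames = ((num_frames - 1) // 4) * 4 + 1  # round down to 4k + 1
    return snapped_width, snapped_height, snapped_frames

print(snap_to_model_grid(640, 360, 66))  # -> (640, 368, 65), matching the warnings above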
Prediction
deepfates/hunyuan-inception:a471cf82
ID: zs4dxmr9zhrm80cmk5xahc77k8
Status: Succeeded
Source: API
Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NCPTN, NCPTN The video clip depicts a high-speed chase scene set in a desert environment. The primary focus is on a vehicle, which appears to be a modified off-road truck, speeding across the sandy terrain. The truck is kicking up a large cloud of sand and dust, indicating its high speed and the rough terrain it is traversing. The vehicle is equipped with a large, mounted weapon system on its back, suggesting it is part of a military or law enforcement operation. The background features a vast desert landscape with sand dunes stretching into the distance. The sky is clear with a few scattered clouds, and the overall lighting suggests it is daytime. The
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts a high-speed chase scene set in a desert environment. The primary focus is on a vehicle, which appears to be a modified off-road truck, speeding across the sandy terrain. The truck is kicking up a large cloud of sand and dust, indicating its high speed and the rough terrain it is traversing. The vehicle is equipped with a large, mounted weapon system on its back, suggesting it is part of a military or law enforcement operation.\nThe background features a vast desert landscape with sand dunes stretching into the distance. The sky is clear with a few scattered clouds, and the overall lighting suggests it is daytime. The", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }Install Replicate’s Node.js client library:npm install replicateImport and set up the client:import Replicate from "replicate"; import fs from "node:fs"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e", { input: { seed: 12345, steps: 50, width: 640, height: 360, prompt: "A video in the style of NCPTN, NCPTN The video clip depicts a high-speed chase scene set in a desert environment. The primary focus is on a vehicle, which appears to be a modified off-road truck, speeding across the sandy terrain. The truck is kicking up a large cloud of sand and dust, indicating its high speed and the rough terrain it is traversing. The vehicle is equipped with a large, mounted weapon system on its back, suggesting it is part of a military or law enforcement operation.\nThe background features a vast desert landscape with sand dunes stretching into the distance. The sky is clear with a few scattered clouds, and the overall lighting suggests it is daytime. The", frame_rate: 16, num_frames: 66, lora_strength: 1, guidance_scale: 6 } } ); // To access the file URL: console.log(output.url()); //=> "http://example.com" // To write the file to disk: fs.writeFile("my-image.png", output);To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:pip install replicateImport the client:import replicateRun deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e", input={ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts a high-speed chase scene set in a desert environment. The primary focus is on a vehicle, which appears to be a modified off-road truck, speeding across the sandy terrain. The truck is kicking up a large cloud of sand and dust, indicating its high speed and the rough terrain it is traversing. The vehicle is equipped with a large, mounted weapon system on its back, suggesting it is part of a military or law enforcement operation.\nThe background features a vast desert landscape with sand dunes stretching into the distance. The sky is clear with a few scattered clouds, and the overall lighting suggests it is daytime. The", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } ) # To access the file URL: print(output.url()) #=> "http://example.com" # To write the file to disk: with open("my-image.png", "wb") as file: file.write(output.read())To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    "input": {
      "seed": 12345,
      "steps": 50,
      "width": 640,
      "height": 360,
      "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts a high-speed chase scene set in a desert environment. The primary focus is on a vehicle, which appears to be a modified off-road truck, speeding across the sandy terrain. The truck is kicking up a large cloud of sand and dust, indicating its high speed and the rough terrain it is traversing. The vehicle is equipped with a large, mounted weapon system on its back, suggesting it is part of a military or law enforcement operation.\\nThe background features a vast desert landscape with sand dunes stretching into the distance. The sky is clear with a few scattered clouds, and the overall lighting suggests it is daytime. The",
      "frame_rate": 16,
      "num_frames": 66,
      "lora_strength": 1,
      "guidance_scale": 6
    }
  }' \
  https://api.replicate.com/v1/predictions

To learn more, take a look at Replicate’s HTTP API reference docs.
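The cURL request above uses the Prefer: wait header, so the call blocks until the prediction finishes. The same HTTP API can also be used asynchronously: create the prediction, then poll the urls.get endpoint that comes back in the response (visible in the JSON output below) until status reaches a terminal value. The following sketch uses the third-party requests package and mirrors only the fields shown on this page; it is an illustration, not a complete client.

import os
import time

import requests  # third-party HTTP client, assumed to be installed

API_URL = "https://api.replicate.com/v1/predictions"
HEADERS = {
    "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}

def create_and_wait(version: str, model_input: dict, poll_seconds: float = 5.0) -> dict:
    # Create the prediction (same payload shape as the cURL example above).
    created = requests.post(
        API_URL, headers=HEADERS, json={"version": version, "input": model_input}
    ).json()
    get_url = created["urls"]["get"]
    # Poll the prediction's own URL until it reaches a terminal status.
    while True:
        prediction = requests.get(get_url, headers=HEADERS).json()
        if prediction["status"] in ("succeeded", "failed", "canceled"):
            return prediction
        time.sleep(poll_seconds)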
Output
{ "completed_at": "2025-01-24T18:09:44.463317Z", "created_at": "2025-01-24T18:01:39.580000Z", "data_removed": false, "error": null, "id": "zs4dxmr9zhrm80cmk5xahc77k8", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts a high-speed chase scene set in a desert environment. The primary focus is on a vehicle, which appears to be a modified off-road truck, speeding across the sandy terrain. The truck is kicking up a large cloud of sand and dust, indicating its high speed and the rough terrain it is traversing. The vehicle is equipped with a large, mounted weapon system on its back, suggesting it is part of a military or law enforcement operation.\nThe background features a vast desert landscape with sand dunes stretching into the distance. The sky is clear with a few scattered clouds, and the overall lighting suggests it is daytime. The", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_9adafd65-3907-4d8d-92ca-5c4d69e6e60f.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_9adafd65-3907-4d8d-92ca-5c4d69e6e60f.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 142\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_9adafd65-3907-4d8d-92ca-5c4d69e6e60f with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.01s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.14s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:40, 2.20s/it]\n[ComfyUI] 10%|█ | 5/50 [00:10<01:40, 2.23s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:38, 2.25s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.26s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.27s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.27s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.28s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:28, 2.28s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:26<01:26, 2.28s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.28s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:17, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:42<01:10, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.29s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.29s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:49<01:04, 2.29s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.29s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.29s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.29s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:54, 2.29s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.29s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.29s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:05<00:48, 2.29s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.29s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.29s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:12<00:41, 2.29s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:38, 2.29s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.29s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.29s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:21<00:32, 2.29s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.29s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.29s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:28<00:25, 2.29s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.29s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.29s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.29s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:37<00:16, 2.29s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.29s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.29s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:44<00:09, 2.29s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.29s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.29s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:51<00:02, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:53<00:00, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:53<00:00, 2.28s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 
1.46s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.27s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 26.00it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.56it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.04it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.96it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 63.63it/s]\n[ComfyUI] Prompt executed in 142.66 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 164.955653206, "total_time": 484.883317 }, "output": "https://replicate.delivery/xezq/6fRa0fIgqRmkuEaGR5Kxp8WAaFFLZP0ratA9s9iSgSOolYIUA/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T18:06:59.507664Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-aaofqpvbtsgyxzou7hht7w36nggxltcjsyocjhrgrh3kwwhgumfq", "get": "https://api.replicate.com/v1/predictions/zs4dxmr9zhrm80cmk5xahc77k8", "cancel": "https://api.replicate.com/v1/predictions/zs4dxmr9zhrm80cmk5xahc77k8/cancel" }, "version": "a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e" }
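The input and metrics fields above are enough for a quick sanity check on clip length and throughput: 65 frames at 16 fps is about a four-second clip, generated in roughly 165 seconds of predict time on an H100 (numbers taken from this prediction's output):

# Values copied from the prediction output above.
num_frames = 65               # after the model rounded 66 down
frame_rate = 16.0             # frames per second
predict_time = 164.955653206  # seconds, from metrics.predict_time

clip_seconds = num_frames / frame_rate
compute_per_video_second = predict_time / clip_seconds

print(f"clip length: {clip_seconds:.2f} s")                                    # ~4.06 s
print(f"compute cost: {compute_per_video_second:.1f} s per second of video")   # ~40.6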
Prediction
deepfates/hunyuan-inception:a471cf82
ID: w0w0hvmj2srme0cmk5xam3pb2r
Status: Succeeded
Source: API
Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NCPTN, NCPTN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }Install Replicate’s Node.js client library:npm install replicateImport and set up the client:import Replicate from "replicate"; import fs from "node:fs"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e", { input: { seed: 12345, steps: 50, width: 640, height: 360, prompt: "A video in the style of NCPTN, NCPTN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.", frame_rate: 16, num_frames: 66, lora_strength: 1, guidance_scale: 6 } } ); // To access the file URL: console.log(output.url()); //=> "http://example.com" // To write the file to disk: fs.writeFile("my-image.png", output);To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:pip install replicateImport the client:import replicateRun deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e", input={ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } ) # To access the file URL: print(output.url()) #=> "http://example.com" # To write the file to disk: with open("my-image.png", "wb") as file: file.write(output.read())To learn more, take a look at the guide on getting started with Python.
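These examples all fix seed to 12345, which makes a run reproducible; changing only the seed is a simple way to get variations of the same shot. The sketch below reuses the replicate.run call from the snippet above in a loop; the particular seed values, the shortened prompt, and the output filenames are arbitrary choices for illustration:

import replicate

VERSION = "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e"
PROMPT = (
    "A video in the style of NCPTN, NCPTN The video clip features a close-up "
    "of a person's face, focusing on their eyes and part of their hair."
)

# Run the same prompt with a few different seeds and save each video.
for seed in (12345, 20000, 31337):
    output = replicate.run(
        VERSION,
        input={
            "seed": seed,
            "steps": 50,
            "width": 640,
            "height": 360,
            "prompt": PROMPT,
            "frame_rate": 16,
            "num_frames": 66,
            "lora_strength": 1,
            "guidance_scale": 6,
        },
    )
    with open(f"ncptn_seed_{seed}.mp4", "wb") as f:
        f.write(output.read())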
Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    "input": {
      "seed": 12345,
      "steps": 50,
      "width": 640,
      "height": 360,
      "prompt": "A video in the style of NCPTN, NCPTN The video clip features a close-up of a person\'s face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person\'s facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person\'s gaze conveying a sense of determination or resolve.",
      "frame_rate": 16,
      "num_frames": 66,
      "lora_strength": 1,
      "guidance_scale": 6
    }
  }' \
  https://api.replicate.com/v1/predictions

To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T18:12:14.572867Z", "created_at": "2025-01-24T18:02:14.422000Z", "data_removed": false, "error": null, "id": "w0w0hvmj2srme0cmk5xam3pb2r", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_1f80f0f5-cc18-4ee1-b325-05df5412a17b.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_1f80f0f5-cc18-4ee1-b325-05df5412a17b.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 139\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_1f80f0f5-cc18-4ee1-b325-05df5412a17b with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.29s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.01s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.13s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:40, 2.19s/it]\n[ComfyUI] 10%|█ | 5/50 [00:10<01:40, 2.23s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:38, 2.25s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.26s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.27s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.28s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:28, 2.28s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:26<01:26, 2.28s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:17, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:42<01:10, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.29s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.29s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:49<01:04, 2.29s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.29s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.29s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.29s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:54, 2.29s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.29s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.29s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:05<00:48, 2.29s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.29s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.29s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:12<00:41, 2.29s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:38, 2.29s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.29s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.29s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:21<00:32, 2.29s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.29s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.29s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:28<00:25, 2.29s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.29s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.29s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.29s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:37<00:16, 2.29s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.29s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.29s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:44<00:09, 2.29s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.29s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.29s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:51<00:02, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:53<00:00, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:53<00:00, 2.28s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.344 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.46s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 
1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.27s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.90it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.56it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.04it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.95it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 63.98it/s]\n[ComfyUI] Prompt executed in 142.75 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 149.996172505, "total_time": 600.150867 }, "output": "https://replicate.delivery/xezq/KVl3LEOuteyJcycUBQJ4s2b0wsVTkZTzwLjcvPSNgCNfnYIUA/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T18:09:44.576694Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-torf6scsc3sswzbni6vjvyvlrgpk2cthp4nsz6jvm346gvxx2jyq", "get": "https://api.replicate.com/v1/predictions/w0w0hvmj2srme0cmk5xam3pb2r", "cancel": "https://api.replicate.com/v1/predictions/w0w0hvmj2srme0cmk5xam3pb2r/cancel" }, "version": "a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e" }
Prediction
deepfates/hunyuan-inception:a471cf82
ID: yvzx6a6tndrma0cmk5x9e5chr0 | Status: Succeeded | Source: API | Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NCPTN, NCPTN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }Install Replicate’s Node.js client library:npm install replicateImport and set up the client:import Replicate from "replicate"; import fs from "node:fs"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of NCPTN, NCPTN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the file to disk (the model returns an .mp4 video):
fs.writeFile("output.mp4", output);

To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate

Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of NCPTN, NCPTN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1,
        "guidance_scale": 6
    }
)

# To access the file URL:
print(output.url())  #=> "http://example.com"

# To write the file to disk (the model returns an .mp4 video):
with open("output.mp4", "wb") as file:
    file.write(output.read())

To learn more, take a look at the guide on getting started with Python.
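replicate.run blocks until the file is ready. If you would rather start the job and poll it yourself (for example to watch the ComfyUI logs shown in the Output section), the Python client also exposes a predictions interface. The sketch below is a minimal, non-authoritative example that assumes replicate.predictions.create and Prediction.reload behave as in recent client versions; the prompt is shortened here for readability.

import time
import replicate

# Start the prediction without blocking (REPLICATE_API_TOKEN must be set).
prediction = replicate.predictions.create(
    version="a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of NCPTN, NCPTN The video clip features "
                  "three individuals standing in a red elevator.",  # shortened
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1,
        "guidance_scale": 6,
    },
)

# Poll until the prediction reaches a terminal state.
while prediction.status not in ("succeeded", "failed", "canceled"):
    time.sleep(5)
    prediction.reload()  # refreshes status, logs and output

print(prediction.status)
print(prediction.output)               # URL of the generated .mp4 when succeeded
print((prediction.logs or "")[-300:])  # tail of the ComfyUI log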
Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    "input": {
      "seed": 12345,
      "steps": 50,
      "width": 640,
      "height": 360,
      "prompt": "A video in the style of NCPTN, NCPTN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene",
      "frame_rate": 16,
      "num_frames": 66,
      "lora_strength": 1,
      "guidance_scale": 6
    }
  }' \
  https://api.replicate.com/v1/predictions

To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T18:14:39.988301Z", "created_at": "2025-01-24T18:02:33.003000Z", "data_removed": false, "error": null, "id": "yvzx6a6tndrma0cmk5x9e5chr0", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_93afc199-6397-44e4-b60e-f0d462393a3f.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_93afc199-6397-44e4-b60e-f0d462393a3f.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 140\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_93afc199-6397-44e4-b60e-f0d462393a3f with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.01s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.14s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.20s/it]\n[ComfyUI] 10%|█ | 5/50 [00:10<01:40, 2.23s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:38, 2.25s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.26s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.27s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.28s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.28s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:26<01:26, 2.28s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:17, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:10, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.29s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.29s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:49<01:04, 2.29s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.29s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.29s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.29s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:54, 2.29s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.29s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.29s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:05<00:48, 2.29s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.29s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.29s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:12<00:41, 2.29s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:38, 2.29s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.29s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.29s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:21<00:32, 2.29s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.29s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.29s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:28<00:25, 2.29s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.29s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.29s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.29s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:37<00:16, 2.29s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.29s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.29s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:44<00:09, 2.29s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.29s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.29s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:51<00:02, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:53<00:00, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:53<00:00, 2.28s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.46s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 
1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.27s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.74it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.56it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.04it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.95it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.25it/s]\n[ComfyUI] Prompt executed in 141.63 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 145.314869561, "total_time": 726.985301 }, "output": "https://replicate.delivery/xezq/nCZeesJVM6g7Nkrx0iXBLpbDfw3NOB3hiaHMjT231lneoihQB/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T18:12:14.673432Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-7eanzfyytigmpo24hqucrvgxaczeaqu5fw3mlqjyqntjtzigumoa", "get": "https://api.replicate.com/v1/predictions/yvzx6a6tndrma0cmk5x9e5chr0", "cancel": "https://api.replicate.com/v1/predictions/yvzx6a6tndrma0cmk5x9e5chr0/cancel" }, "version": "a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e" }
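Both predictions above log "⚠️ Adjusted dimensions from 640x360 to 640x368" and "Adjusted frame count from 66 to 65". From those warnings the model appears to want width and height that are multiples of 16 and a frame count of the form 4k + 1. The helper below is only a sketch of that rounding, inferred from the log lines rather than from the model's source code.

import math

def adjust_inputs(width: int, height: int, num_frames: int) -> tuple[int, int, int]:
    """Round inputs to values the model accepts (assumed rules, inferred from the logs)."""
    w = math.ceil(width / 16) * 16                 # spatial dims: round up to a multiple of 16
    h = math.ceil(height / 16) * 16
    frames = 4 * round((num_frames - 1) / 4) + 1   # temporal: snap to the nearest 4k + 1
    return w, h, frames

print(adjust_inputs(640, 360, 66))  # -> (640, 368, 65), matching the warnings above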
Prediction
deepfates/hunyuan-inception:a471cf82
ID: td9a3k1xp5rm80cmk6mtsz8z74 | Status: Succeeded | Source: API | Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NCPTN, NCPTN The video clip depicts two men walking down a dimly lit, narrow hallway. The hallway appears to be part of an industrial or institutional building, as evidenced by the concrete walls and ceiling. The lighting is minimal, with a single light fixture visible on the ceiling, casting shadows on the walls and creating a somewhat eerie atmosphere. The man in the foreground is wearing a dark suit and tie, while the man behind him is dressed in a lighter-colored suit. Both men are walking in the same direction, with the man in the foreground slightly ahead. The hallway is lined with pipes and other utility conduits, adding to the industrial feel of the
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts two men walking down a dimly lit, narrow hallway. The hallway appears to be part of an industrial or institutional building, as evidenced by the concrete walls and ceiling. The lighting is minimal, with a single light fixture visible on the ceiling, casting shadows on the walls and creating a somewhat eerie atmosphere.\nThe man in the foreground is wearing a dark suit and tie, while the man behind him is dressed in a lighter-colored suit. Both men are walking in the same direction, with the man in the foreground slightly ahead. The hallway is lined with pipes and other utility conduits, adding to the industrial feel of the", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }Install Replicate’s Node.js client library:npm install replicateImport and set up the client:import Replicate from "replicate"; import fs from "node:fs"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of NCPTN, NCPTN The video clip depicts two men walking down a dimly lit, narrow hallway. The hallway appears to be part of an industrial or institutional building, as evidenced by the concrete walls and ceiling. The lighting is minimal, with a single light fixture visible on the ceiling, casting shadows on the walls and creating a somewhat eerie atmosphere.\nThe man in the foreground is wearing a dark suit and tie, while the man behind him is dressed in a lighter-colored suit. Both men are walking in the same direction, with the man in the foreground slightly ahead. The hallway is lined with pipes and other utility conduits, adding to the industrial feel of the",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the file to disk (the model returns an .mp4 video):
fs.writeFile("output.mp4", output);

To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate

Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts two men walking down a dimly lit, narrow hallway. The hallway appears to be part of an industrial or institutional building, as evidenced by the concrete walls and ceiling. The lighting is minimal, with a single light fixture visible on the ceiling, casting shadows on the walls and creating a somewhat eerie atmosphere.\nThe man in the foreground is wearing a dark suit and tie, while the man behind him is dressed in a lighter-colored suit. Both men are walking in the same direction, with the man in the foreground slightly ahead. The hallway is lined with pipes and other utility conduits, adding to the industrial feel of the",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1,
        "guidance_scale": 6
    }
)

# To access the file URL:
print(output.url())  #=> "http://example.com"

# To write the file to disk (the model returns an .mp4 video):
with open("output.mp4", "wb") as file:
    file.write(output.read())

To learn more, take a look at the guide on getting started with Python.
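These examples pin seed to 12345, so repeated calls reproduce the same clip. One simple way to get variations is to loop over seeds with an otherwise identical input, as sketched below; the seed values and file names are arbitrary, and the prompt is truncated for readability.

import replicate

MODEL = "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e"

base_input = {
    "steps": 50,
    "width": 640,
    "height": 360,
    "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts two men "
              "walking down a dimly lit, narrow hallway.",  # shortened
    "frame_rate": 16,
    "num_frames": 66,
    "lora_strength": 1,
    "guidance_scale": 6,
}

for seed in (1, 2, 3):  # arbitrary seeds
    output = replicate.run(MODEL, input={**base_input, "seed": seed})
    with open(f"hallway_seed_{seed}.mp4", "wb") as f:
        f.write(output.read())  # each output is an .mp4 video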
Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    "input": {
      "seed": 12345,
      "steps": 50,
      "width": 640,
      "height": 360,
      "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts two men walking down a dimly lit, narrow hallway. The hallway appears to be part of an industrial or institutional building, as evidenced by the concrete walls and ceiling. The lighting is minimal, with a single light fixture visible on the ceiling, casting shadows on the walls and creating a somewhat eerie atmosphere.\\nThe man in the foreground is wearing a dark suit and tie, while the man behind him is dressed in a lighter-colored suit. Both men are walking in the same direction, with the man in the foreground slightly ahead. The hallway is lined with pipes and other utility conduits, adding to the industrial feel of the",
      "frame_rate": 16,
      "num_frames": 66,
      "lora_strength": 1,
      "guidance_scale": 6
    }
  }' \
  https://api.replicate.com/v1/predictions

To learn more, take a look at Replicate’s HTTP API reference docs.
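Without the Prefer: wait header the POST above returns immediately, and the response's urls.get endpoint (shown in the Output JSON below) can be polled until the prediction finishes. Here is a rough sketch using the requests library; the prediction ID is taken from this example's output.

import os
import time
import requests

headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

# The "get" URL comes from the "urls" field of the POST response / Output JSON below.
get_url = "https://api.replicate.com/v1/predictions/td9a3k1xp5rm80cmk6mtsz8z74"

while True:
    prediction = requests.get(get_url, headers=headers).json()
    if prediction["status"] in ("succeeded", "failed", "canceled"):
        break
    time.sleep(5)

print(prediction["status"])
print(prediction.get("output"))  # delivery URL of the generated video when succeeded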
Output
{ "completed_at": "2025-01-24T18:58:14.296861Z", "created_at": "2025-01-24T18:53:13.009000Z", "data_removed": false, "error": null, "id": "td9a3k1xp5rm80cmk6mtsz8z74", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts two men walking down a dimly lit, narrow hallway. The hallway appears to be part of an industrial or institutional building, as evidenced by the concrete walls and ceiling. The lighting is minimal, with a single light fixture visible on the ceiling, casting shadows on the walls and creating a somewhat eerie atmosphere.\nThe man in the foreground is wearing a dark suit and tie, while the man behind him is dressed in a lighter-colored suit. Both men are walking in the same direction, with the man in the foreground slightly ahead. The hallway is lined with pipes and other utility conduits, adding to the industrial feel of the", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_6b765284-9678-488c-a2af-0188df9aa998.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_6b765284-9678-488c-a2af-0188df9aa998.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 142\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_6b765284-9678-488c-a2af-0188df9aa998 with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:53, 2.31s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:39, 2.06s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:42, 2.18s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:42, 2.23s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:41, 2.26s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:40, 2.28s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:38, 2.28s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:18<01:36, 2.29s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:34, 2.29s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.30s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.30s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.30s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:25, 2.30s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.30s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.30s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:41<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:09, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:02, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:57<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:04<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:46, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:11<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:20<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:27<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:23, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:34<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:43<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:50<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 
1.48s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.26s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.29s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.43it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.52it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.99it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.90it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.06it/s]\n[ComfyUI] Prompt executed in 143.40 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 148.936705192, "total_time": 301.287861 }, "output": "https://replicate.delivery/xezq/3ymSzahVLQKcEtHw8Q3ViS8l71hLTr3WEM3HVjgMXLrxUGCF/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T18:55:45.360156Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-md766sby7b3wnas5hzmxz4wl3bajxke5bp6smbeba2bdpjlcwxaq", "get": "https://api.replicate.com/v1/predictions/td9a3k1xp5rm80cmk6mtsz8z74", "cancel": "https://api.replicate.com/v1/predictions/td9a3k1xp5rm80cmk6mtsz8z74/cancel" }, "version": "a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e" }
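A couple of the logged numbers can be predicted from the inputs alone. The sampler reports "Sampling 65 frames in 17 latents", which is consistent with a 4x temporal compression plus the initial frame, and the clip length follows directly from num_frames and frame_rate. The arithmetic below reproduces those values; the latent formula is an inference from the logs, not taken from the model code.

frame_rate = 16
num_frames = 65  # after the 66 -> 65 adjustment

latents = (num_frames - 1) // 4 + 1   # assumed 4x temporal compression + first frame
duration_s = num_frames / frame_rate

print(latents)                # 17, matching "Sampling 65 frames in 17 latents"
print(round(duration_s, 2))   # ~4.06 seconds of video at 16 fps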
Prediction
deepfates/hunyuan-inception:a471cf82
ID: 5wm5rr3mchrm80cmn4a8xdnfy8 | Status: Succeeded | Source: API | Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NCPTN, NCPTN The video clip depicts A face emerging from darkness as they step into a beam of light
- frame_rate: 16
- num_frames: 66
- lora_strength: 1.2
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts A face emerging from darkness as they step into a beam of light", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }Install Replicate’s Node.js client library:npm install replicateImport and set up the client:import Replicate from "replicate"; import fs from "node:fs"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of NCPTN, NCPTN The video clip depicts A face emerging from darkness as they step into a beam of light",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1.2,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the file to disk (the model returns an .mp4 video):
fs.writeFile("output.mp4", output);

To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate

Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts A face emerging from darkness as they step into a beam of light",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1.2,
        "guidance_scale": 6
    }
)

# To access the file URL:
print(output.url())  #=> "http://example.com"

# To write the file to disk (the model returns an .mp4 video):
with open("output.mp4", "wb") as file:
    file.write(output.read())

To learn more, take a look at the guide on getting started with Python.
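Every prompt in these predictions starts with the same "A video in the style of NCPTN, NCPTN" prefix before the scene description. If you build prompts programmatically, a small helper keeps that prefix consistent; build_prompt is a hypothetical convenience function for illustration, not part of the model's API.

TRIGGER_PREFIX = "A video in the style of NCPTN, NCPTN "

def build_prompt(description: str) -> str:
    """Prepend the style prefix used throughout these examples (illustrative helper)."""
    return TRIGGER_PREFIX + description.strip()

print(build_prompt("The video clip depicts A face emerging from darkness as they step into a beam of light"))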
Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    "input": {
      "seed": 12345,
      "steps": 50,
      "width": 640,
      "height": 360,
      "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts A face emerging from darkness as they step into a beam of light",
      "frame_rate": 16,
      "num_frames": 66,
      "lora_strength": 1.2,
      "guidance_scale": 6
    }
  }' \
  https://api.replicate.com/v1/predictions

To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-27T18:47:36.495567Z", "created_at": "2025-01-27T18:44:37.604000Z", "data_removed": false, "error": null, "id": "5wm5rr3mchrm80cmn4a8xdnfy8", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts A face emerging from darkness as they step into a beam of light", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_d0f0fbef-df13-43da-97a9-00f9279adfc2.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_d0f0fbef-df13-43da-97a9-00f9279adfc2.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\nExecuting node 42, title: HunyuanVideo Enhance A Video, class type: HyVideoEnhanceAVideo\n[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:01, 1.62it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.59it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:01<00:00, 1.61it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.03it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 31\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 32\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_d0f0fbef-df13-43da-97a9-00f9279adfc2 with strength: 1.2\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<02:02, 2.50s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:40, 2.10s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:42, 2.19s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:42, 2.23s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:41, 2.26s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:40, 2.27s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:38, 2.28s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:18<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.29s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.30s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.30s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:41<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:02, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:57<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:04<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:20<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:27<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:43<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 
48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.760 GB\n[ComfyUI] Max allocated memory: max_memory=15.559 GB\n[ComfyUI] Max reserved memory: max_reserved=16.875 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.65s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.32s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.37s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 28.62it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.49it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.99it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.90it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 84.01it/s]\n[ComfyUI] Prompt executed in 148.93 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 157.633056457, "total_time": 178.891567 }, "output": "https://replicate.delivery/xezq/cL6EZZa0kQqqFtBwjxi3jWccSeT5IgBf5PUS1BChzr2IbYJUA/HunyuanVideo_00001.mp4", "started_at": "2025-01-27T18:44:58.862510Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-6lthz4zbrbtr3pt6btcftvlsu3lqp5wlu2znaddai4272ixyzfnq", "get": "https://api.replicate.com/v1/predictions/5wm5rr3mchrm80cmn4a8xdnfy8", "cancel": "https://api.replicate.com/v1/predictions/5wm5rr3mchrm80cmn4a8xdnfy8/cancel" }, "version": "a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e" }
Prediction
deepfates/hunyuan-inception:a471cf82IDja4ycpd6f9rma0cmn4a8e579k0StatusSucceededSourceAPIHardwareH100Total durationCreatedInput
- seed
- 12345
- steps
- 50
- width
- 640
- height
- 360
- prompt
- A video in the style of NCPTN, NCPTN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward
- frame_rate
- 16
- num_frames
- 66
- lora_strength
- 1.2
- guidance_scale
- 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }Install Replicate’s Node.js client library:npm install replicateImport and set up the client:import Replicate from "replicate"; import fs from "node:fs"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e", { input: { seed: 12345, steps: 50, width: 640, height: 360, prompt: "A video in the style of NCPTN, NCPTN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward", frame_rate: 16, num_frames: 66, lora_strength: 1.2, guidance_scale: 6 } } ); // To access the file URL: console.log(output.url()); //=> "http://example.com" // To write the file to disk: fs.writeFile("my-image.png", output);To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:pip install replicateImport the client:import replicateRun deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e", input={ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 } ) # To access the file URL: print(output.url()) #=> "http://example.com" # To write the file to disk: with open("my-image.png", "wb") as file: file.write(output.read())To learn more, take a look at the guide on getting started with Python.
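The example above saves the result as my-image.png, but this model produces an H.264 MP4, and the trigger phrase from the model description needs to be at the start of the prompt for the LoRA style to show up. Here is a minimal Python sketch assuming the same client behavior shown above (output.url() and output.read()); the build_prompt helper and the output filename are illustrative, not part of the API:

import replicate

# Illustrative helper (not part of the API): prepend the trigger phrase
# so the NCPTN LoRA style is applied, per the model description.
def build_prompt(scene: str) -> str:
    return f"A video in the style of NCPTN, NCPTN {scene}"

output = replicate.run(
    "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
    input={
        "prompt": build_prompt(
            "The video clip depicts a detective's weathered face emerging from darkness "
            "as they step into a single shaft of light, cigarette smoke curling upward"
        ),
        "width": 640,
        "height": 368,     # pre-adjusted size; the model would otherwise bump 360 to 368
        "num_frames": 65,  # pre-adjusted count; the model would otherwise drop 66 to 65
        "frame_rate": 16,
        "steps": 50,
        "guidance_scale": 6,
        "lora_strength": 1.2,
    },
)

print(output.url())  # remote URL of the generated video

# The output is an H.264 MP4 ("format": "video/h264-mp4" in the logs),
# so save it with a matching extension rather than "my-image.png".
with open("hunyuan-inception.mp4", "wb") as f:
    f.write(output.read())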
Run deepfates/hunyuan-inception using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts A detective\'s weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 } }' \ https://api.replicate.com/v1/predictionsTo learn more, take a look at Replicate’s HTTP API reference docs.
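The Prefer: wait header asks the API to hold the connection open until the prediction finishes, but the runs on this page take roughly 150 seconds, so you may prefer to create the prediction and then poll it. Below is a hedged sketch that uses the same endpoint and input as the cURL request above and the get URL that appears under urls in the Output below; the requests library and the 5-second polling interval are assumptions, not requirements:

import os
import time
import requests

headers = {
    "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}

# Create the prediction without Prefer: wait, so the call returns immediately.
resp = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers=headers,
    json={
        "version": "deepfates/hunyuan-inception:a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e",
        "input": {
            "seed": 12345,
            "steps": 50,
            "width": 640,
            "height": 360,
            "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward",
            "frame_rate": 16,
            "num_frames": 66,
            "lora_strength": 1.2,
            "guidance_scale": 6,
        },
    },
)
resp.raise_for_status()
prediction = resp.json()

# Poll the "get" URL until the prediction reaches a terminal state.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(5)  # arbitrary polling interval
    prediction = requests.get(prediction["urls"]["get"], headers=headers).json()

if prediction["status"] == "succeeded":
    print(prediction["output"])  # URL of the generated MP4
else:
    print(prediction.get("error"))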
Output
{ "completed_at": "2025-01-27T18:50:05.419605Z", "created_at": "2025-01-27T18:44:50.426000Z", "data_removed": false, "error": null, "id": "ja4ycpd6f9rma0cmn4a8e579k0", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NCPTN, NCPTN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_dbdc967f-95ca-4ca2-b7c5-2cf0a3b11481.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_dbdc967f-95ca-4ca2-b7c5-2cf0a3b11481.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 42\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 42\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_dbdc967f-95ca-4ca2-b7c5-2cf0a3b11481 with strength: 1.2\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.01s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.14s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.20s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.30s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.30s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.30s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:02, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.46s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 
1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.27s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.93it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.55it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.03it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.95it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.66it/s]\n[ComfyUI] Prompt executed in 142.38 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 148.818382698, "total_time": 314.993605 }, "output": "https://replicate.delivery/xezq/5Mu9rBUYt5LYBFD21oQe2oW6x5Y0KgR91lcgAPVYV1uuOsEKA/HunyuanVideo_00001.mp4", "started_at": "2025-01-27T18:47:36.601222Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-kugpvz6n2jwwmxghycsfloete7kur54sgktl2xk6yimbda3qkxka", "get": "https://api.replicate.com/v1/predictions/ja4ycpd6f9rma0cmn4a8e579k0", "cancel": "https://api.replicate.com/v1/predictions/ja4ycpd6f9rma0cmn4a8e579k0/cancel" }, "version": "a471cf828b9f03ea639c745bbd27dc931e6a575dfbbffb2fa7cad6d71a0dab9e" }
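Note the two ⚠️ lines at the top of these logs: the requested 640x360 was bumped to 640x368 and 66 frames were trimmed to 65 before sampling. The exact rule isn't documented on this page, but the adjusted values are consistent with rounding width and height up to a multiple of 16 and snapping the frame count to the 4·k + 1 form the sampler reports ("Sampling 65 frames in 17 latents"). Here is a small sketch of that assumed rule, useful for requesting sizes that won't be changed under you:

import math

def adjust_inputs(width: int, height: int, num_frames: int) -> tuple[int, int, int]:
    """Pre-apply the adjustments the model reports in its logs.

    Assumed rule (inferred from the warnings above, not documented here):
    width and height are rounded up to the next multiple of 16, and the
    frame count is snapped down to the nearest value of the form 4*k + 1.
    """
    adj_w = math.ceil(width / 16) * 16
    adj_h = math.ceil(height / 16) * 16
    adj_frames = max(5, ((num_frames - 1) // 4) * 4 + 1)
    return adj_w, adj_h, adj_frames

print(adjust_inputs(640, 360, 66))  # (640, 368, 65) -- matches the ⚠️ lines in the logs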
Want to make some of these yourself?
Run this model