lucataco/hunyuan-heygen-joshua
HunyuanVideo finetune of an AI Avatar from Heygen
- Public
- 80 runs
- Hardware: H100
- Fine-tune
Prediction
lucataco/hunyuan-heygen-joshua:e82a4154
ID: 7rsyc7yzd9rme0cm9knrb739g8 · Status: Succeeded · Source: Web · Hardware: H100
Input
- crf: 19
- steps: 30
- width: 960
- height: 544
- prompt: HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The background includes large windows or glass doors, through which greenery and possibly other buildings can be seen, suggesting an urban or semi-urban setting. The lighting is natural, with sunlight streaming in, creating a bright and airy atmosphere. The man appears to be walking while speaking, as he is looking directly at the camera.
- lora_url: ""
- flow_shift: 9
- frame_rate: 15
- num_frames: 33
- force_offload: true
- lora_strength: 0.9
- guidance_scale: 6
- denoise_strength: 1
{
  "crf": 19,
  "steps": 30,
  "width": 960,
  "height": 544,
  "prompt": "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The background includes large windows or glass doors, through which greenery and possibly other buildings can be seen, suggesting an urban or semi-urban setting. The lighting is natural, with sunlight streaming in, creating a bright and airy atmosphere. The man appears to be walking while speaking, as he is looking directly at the camera.",
  "lora_url": "",
  "flow_shift": 9,
  "frame_rate": 15,
  "num_frames": 33,
  "force_offload": true,
  "lora_strength": 0.9,
  "guidance_scale": 6,
  "denoise_strength": 1
}
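A quick sanity check on the geometry inputs: the prediction logs report "Sampling 33 frames in 9 latents", consistent with frames being packed four-to-one into latents plus one anchor frame, and the clip duration follows from `num_frames / frame_rate`. A minimal sketch — the helper names are illustrative, not part of the Replicate API:

```python
# Illustrative sanity-check helpers for HunyuanVideo geometry inputs.

def latent_frames(num_frames: int) -> int:
    """Frames pack 4-to-1 into latents, plus one anchor frame: (n - 1) / 4 + 1."""
    if (num_frames - 1) % 4 != 0:
        raise ValueError("num_frames should be of the form 4k + 1, e.g. 33")
    return (num_frames - 1) // 4 + 1

def clip_seconds(num_frames: int, frame_rate: int) -> float:
    """Output clip duration in seconds."""
    return num_frames / frame_rate

print(latent_frames(33))     # 9 latents, matching the prediction logs
print(clip_seconds(33, 15))  # 2.2 second clip
```

With the defaults on this page (33 frames at 15 fps), each run produces roughly a 2.2-second clip.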
npm install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

const output = await replicate.run(
  "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
  {
    input: {
      crf: 19,
      steps: 30,
      width: 960,
      height: 544,
      prompt: "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The background includes large windows or glass doors, through which greenery and possibly other buildings can be seen, suggesting an urban or semi-urban setting. The lighting is natural, with sunlight streaming in, creating a bright and airy atmosphere. The man appears to be walking while speaking, as he is looking directly at the camera.",
      lora_url: "",
      flow_shift: 9,
      frame_rate: 15,
      num_frames: 33,
      force_offload: true,
      lora_strength: 0.9,
      guidance_scale: 6,
      denoise_strength: 1
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import the client:

import replicate

Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

output = replicate.run(
    "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    input={
        "crf": 19,
        "steps": 30,
        "width": 960,
        "height": 544,
        "prompt": "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The background includes large windows or glass doors, through which greenery and possibly other buildings can be seen, suggesting an urban or semi-urban setting. The lighting is natural, with sunlight streaming in, creating a bright and airy atmosphere. The man appears to be walking while speaking, as he is looking directly at the camera.",
        "lora_url": "",
        "flow_shift": 9,
        "frame_rate": 15,
        "num_frames": 33,
        "force_offload": True,
        "lora_strength": 0.9,
        "guidance_scale": 6,
        "denoise_strength": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
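For this model the printed output is a URL to the generated MP4 on `replicate.delivery`. One common next step is saving it locally; a small sketch using only the standard library — `filename_from_url` is an illustrative helper, not part of the Replicate client:

```python
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve  # used only when you actually download

def filename_from_url(url: str) -> str:
    """Derive a local filename from a delivery URL, e.g. .../HunyuanVideo_00001.mp4."""
    name = os.path.basename(urlparse(url).path)
    return name or "output.mp4"

# Given an `output` URL returned by replicate.run(...):
#   urlretrieve(output, filename_from_url(output))

print(filename_from_url(
    "https://replicate.delivery/xezq/abc/HunyuanVideo_00001.mp4"))
# HunyuanVideo_00001.mp4
```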
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    "input": {
      "crf": 19,
      "steps": 30,
      "width": 960,
      "height": 544,
      "prompt": "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The background includes large windows or glass doors, through which greenery and possibly other buildings can be seen, suggesting an urban or semi-urban setting. The lighting is natural, with sunlight streaming in, creating a bright and airy atmosphere. The man appears to be walking while speaking, as he is looking directly at the camera.",
      "lora_url": "",
      "flow_shift": 9,
      "frame_rate": 15,
      "num_frames": 33,
      "force_offload": true,
      "lora_strength": 0.9,
      "guidance_scale": 6,
      "denoise_strength": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
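The request body for the HTTP API is just the model version hash plus an `input` object. If you assemble it programmatically rather than with a shell heredoc, a sketch like this keeps the JSON well-formed — the defaults mirror this page's example, and `prediction_body` is an illustrative helper:

```python
import json

VERSION = "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f"

def prediction_body(prompt: str, **overrides) -> str:
    """Build the JSON body for POST /v1/predictions with this model's defaults."""
    input_params = {
        "crf": 19, "steps": 30, "width": 960, "height": 544,
        "prompt": prompt, "lora_url": "", "flow_shift": 9,
        "frame_rate": 15, "num_frames": 33, "force_offload": True,
        "lora_strength": 0.9, "guidance_scale": 6, "denoise_strength": 1,
    }
    input_params.update(overrides)  # e.g. lora_strength=0.8
    return json.dumps({"version": VERSION, "input": input_params})

body = prediction_body("HGM1 man standing indoors", lora_strength=0.8)
```

The resulting string can be sent with any HTTP client in place of the `-d` payload above.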
Output
{ "completed_at": "2025-01-09T21:18:02.635694Z", "created_at": "2025-01-09T21:15:14.154000Z", "data_removed": false, "error": null, "id": "7rsyc7yzd9rme0cm9knrb739g8", "input": { "crf": 19, "steps": 30, "width": 960, "height": 544, "prompt": "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The background includes large windows or glass doors, through which greenery and possibly other buildings can be seen, suggesting an urban or semi-urban setting. The lighting is natural, with sunlight streaming in, creating a bright and airy atmosphere. The man appears to be walking while speaking, as he is looking directly at the camera.", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 33, "force_offload": true, "lora_strength": 0.9, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Random seed set to: 3513631006\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 95\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\n[ComfyUI] Input (height, width, video_length) = (544, 960, 33)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 33 frames in 9 latents at 960x544 with 30 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:02<01:02, 2.14s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:05<01:13, 2.61s/it]\n[ComfyUI] 10%|█ | 3/30 [00:08<01:14, 2.76s/it]\n[ComfyUI] 13%|█▎ | 4/30 [00:10<01:13, 2.83s/it]\n[ComfyUI] 17%|█▋ | 5/30 [00:13<01:11, 2.87s/it]\n[ComfyUI] 20%|██ | 6/30 [00:16<01:09, 
2.90s/it]\n[ComfyUI] 23%|██▎ | 7/30 [00:19<01:07, 2.91s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:22<01:04, 2.92s/it]\n[ComfyUI] 30%|███ | 9/30 [00:25<01:01, 2.93s/it]\n[ComfyUI] 33%|███▎ | 10/30 [00:28<00:58, 2.94s/it]\n[ComfyUI] 37%|███▋ | 11/30 [00:31<00:55, 2.94s/it]\n[ComfyUI] 40%|████ | 12/30 [00:34<00:52, 2.94s/it]\n[ComfyUI] 43%|████▎ | 13/30 [00:37<00:50, 2.94s/it]\n[ComfyUI] 47%|████▋ | 14/30 [00:40<00:47, 2.94s/it]\n[ComfyUI] 50%|█████ | 15/30 [00:43<00:44, 2.94s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [00:46<00:41, 2.94s/it]\n[ComfyUI] 57%|█████▋ | 17/30 [00:49<00:38, 2.95s/it]\n[ComfyUI] 60%|██████ | 18/30 [00:52<00:35, 2.95s/it]\n[ComfyUI] 63%|██████▎ | 19/30 [00:55<00:32, 2.94s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [00:58<00:29, 2.94s/it]\n[ComfyUI] 70%|███████ | 21/30 [01:01<00:26, 2.94s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [01:03<00:23, 2.94s/it]\n[ComfyUI] 77%|███████▋ | 23/30 [01:06<00:20, 2.94s/it]\n[ComfyUI] 80%|████████ | 24/30 [01:09<00:17, 2.94s/it]\n[ComfyUI] 83%|████████▎ | 25/30 [01:12<00:14, 2.94s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [01:15<00:11, 2.94s/it]\n[ComfyUI] 90%|█████████ | 27/30 [01:18<00:08, 2.94s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [01:21<00:05, 2.94s/it]\n[ComfyUI] 97%|█████████▋| 29/30 [01:24<00:02, 2.94s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.94s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.92s/it]\n[ComfyUI] Allocated memory: memory=12.299 GB\n[ComfyUI] Max allocated memory: max_memory=15.815 GB\n[ComfyUI] Max reserved memory: max_reserved=17.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:01<00:02, 1.01s/it]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:02<00:01, 1.09s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.06it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.02it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, 
?it/s]\n[ComfyUI] Blending tiles: 67%|██████▋ | 2/3 [00:00<00:00, 18.21it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 23.33it/s]\n[ComfyUI] Prompt executed in 107.32 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 15.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 116.948723882, "total_time": 168.481694 }, "output": "https://replicate.delivery/xezq/QuchsT4tffmcrENPPEcms4IKLFrBvYqfX3Cufe8Ee4qvCv3AF/HunyuanVideo_00001.mp4", "started_at": "2025-01-09T21:16:05.686970Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-i3o7ross2lz3zq26ecfmbynizf55ouhanvrbfurwzfekf2abx4na", "get": "https://api.replicate.com/v1/predictions/7rsyc7yzd9rme0cm9knrb739g8", "cancel": "https://api.replicate.com/v1/predictions/7rsyc7yzd9rme0cm9knrb739g8/cancel" }, "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f" }
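The `metrics` in the output distinguish `predict_time` (time actually running) from `total_time` (including queueing). The same breakdown can be recovered from the `created_at` / `started_at` / `completed_at` timestamps; a sketch using this prediction's values:

```python
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    # The API returns ISO-8601 with a trailing "Z"; normalize for fromisoformat.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

created = parse_ts("2025-01-09T21:15:14.154000Z")
started = parse_ts("2025-01-09T21:16:05.686970Z")
completed = parse_ts("2025-01-09T21:18:02.635694Z")

queue_seconds = (started - created).total_seconds()
run_seconds = (completed - started).total_seconds()
print(round(queue_seconds, 1), round(run_seconds, 1))  # 51.5 116.9
```

`run_seconds` matches the reported `predict_time` of 116.948723882 seconds; the remainder of `total_time` was spent queued.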
Prediction
lucataco/hunyuan-heygen-joshua:e82a4154
ID: kj8sypbxqnrm80cm9krrz08tg4 · Status: Succeeded · Source: Web · Hardware: H100
Input
- crf: 19
- steps: 30
- width: 960
- height: 544
- prompt: HGM1 man standing outdoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The background suggesting an urban street or semi-urban setting. He is walking while looking directly at the camera.
- lora_url: ""
- flow_shift: 9
- frame_rate: 15
- num_frames: 33
- force_offload: true
- lora_strength: 0.8
- guidance_scale: 6
- denoise_strength: 1
{
  "crf": 19,
  "steps": 30,
  "width": 960,
  "height": 544,
  "prompt": "HGM1 man standing outdoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The background suggesting an urban street or semi-urban setting. He is walking while looking directly at the camera.",
  "lora_url": "",
  "flow_shift": 9,
  "frame_rate": 15,
  "num_frames": 33,
  "force_offload": true,
  "lora_strength": 0.8,
  "guidance_scale": 6,
  "denoise_strength": 1
}
npm install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

const output = await replicate.run(
  "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
  {
    input: {
      crf: 19,
      steps: 30,
      width: 960,
      height: 544,
      prompt: "HGM1 man standing outdoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The background suggesting an urban street or semi-urban setting. He is walking while looking directly at the camera.",
      lora_url: "",
      flow_shift: 9,
      frame_rate: 15,
      num_frames: 33,
      force_offload: true,
      lora_strength: 0.8,
      guidance_scale: 6,
      denoise_strength: 1
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import the client:

import replicate

Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

output = replicate.run(
    "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    input={
        "crf": 19,
        "steps": 30,
        "width": 960,
        "height": 544,
        "prompt": "HGM1 man standing outdoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The background suggesting an urban street or semi-urban setting. He is walking while looking directly at the camera.",
        "lora_url": "",
        "flow_shift": 9,
        "frame_rate": 15,
        "num_frames": 33,
        "force_offload": True,
        "lora_strength": 0.8,
        "guidance_scale": 6,
        "denoise_strength": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    "input": {
      "crf": 19,
      "steps": 30,
      "width": 960,
      "height": 544,
      "prompt": "HGM1 man standing outdoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The background suggesting an urban street or semi-urban setting. He is walking while looking directly at the camera.",
      "lora_url": "",
      "flow_shift": 9,
      "frame_rate": 15,
      "num_frames": 33,
      "force_offload": true,
      "lora_strength": 0.8,
      "guidance_scale": 6,
      "denoise_strength": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-09T21:23:50.553367Z", "created_at": "2025-01-09T21:21:22.365000Z", "data_removed": false, "error": null, "id": "kj8sypbxqnrm80cm9krrz08tg4", "input": { "crf": 19, "steps": 30, "width": 960, "height": 544, "prompt": "HGM1 man standing outdoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The background suggesting an urban street or semi-urban setting. He is walking while looking directly at the camera.", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 33, "force_offload": true, "lora_strength": 0.8, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Random seed set to: 3067296838\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:01, 1.56it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.58it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:01<00:00, 1.61it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 2.35it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 
2.00it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 52\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 53\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: lora_comfyui with strength: 0.8\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] Loading 1 new model\n[ComfyUI] loaded completely 0.0 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (544, 960, 33)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 33 frames in 9 latents at 960x544 with 30 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:02<01:05, 2.26s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:05<01:14, 2.66s/it]\n[ComfyUI] 10%|█ | 3/30 [00:08<01:15, 2.79s/it]\n[ComfyUI] 13%|█▎ | 4/30 [00:11<01:14, 2.85s/it]\n[ComfyUI] 17%|█▋ | 5/30 [00:14<01:12, 2.89s/it]\n[ComfyUI] 20%|██ | 6/30 [00:16<01:09, 2.91s/it]\n[ComfyUI] 23%|██▎ | 7/30 [00:19<01:07, 2.92s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:22<01:04, 2.92s/it]\n[ComfyUI] 30%|███ | 9/30 [00:25<01:01, 2.93s/it]\n[ComfyUI] 33%|███▎ | 10/30 [00:28<00:58, 2.93s/it]\n[ComfyUI] 37%|███▋ | 11/30 [00:31<00:55, 2.93s/it]\n[ComfyUI] 40%|████ | 12/30 [00:34<00:52, 2.94s/it]\n[ComfyUI] 43%|████▎ | 13/30 [00:37<00:49, 
2.94s/it]\n[ComfyUI] 47%|████▋ | 14/30 [00:40<00:47, 2.94s/it]\n[ComfyUI] 50%|█████ | 15/30 [00:43<00:44, 2.94s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [00:46<00:41, 2.94s/it]\n[ComfyUI] 57%|█████▋ | 17/30 [00:49<00:38, 2.94s/it]\n[ComfyUI] 60%|██████ | 18/30 [00:52<00:35, 2.94s/it]\n[ComfyUI] 63%|██████▎ | 19/30 [00:55<00:32, 2.94s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [00:58<00:29, 2.94s/it]\n[ComfyUI] 70%|███████ | 21/30 [01:01<00:26, 2.94s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [01:04<00:23, 2.94s/it]\n[ComfyUI] 77%|███████▋ | 23/30 [01:06<00:20, 2.94s/it]\n[ComfyUI] 80%|████████ | 24/30 [01:09<00:17, 2.94s/it]\n[ComfyUI] 83%|████████▎ | 25/30 [01:12<00:14, 2.94s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [01:15<00:11, 2.94s/it]\n[ComfyUI] 90%|█████████ | 27/30 [01:18<00:08, 2.94s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [01:21<00:05, 2.94s/it]\n[ComfyUI] 97%|█████████▋| 29/30 [01:24<00:02, 2.94s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.94s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.92s/it]\n[ComfyUI] Allocated memory: memory=12.758 GB\n[ComfyUI] Max allocated memory: max_memory=16.274 GB\n[ComfyUI] Max reserved memory: max_reserved=17.781 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:01<00:02, 1.04s/it]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:02<00:01, 1.10s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.05it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.01it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 27.03it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 27.00it/s]\n[ComfyUI] Prompt executed in 122.80 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 
'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 15.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 132.387405468, "total_time": 148.188367 }, "output": "https://replicate.delivery/xezq/qJuHqap7fAx0QSqZf3oZdzQHvdfX0gZjG8VdnfEEbaFZG8NQB/HunyuanVideo_00001.mp4", "started_at": "2025-01-09T21:21:38.165962Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-uhmd4ujfs74j3iv4bfuz4cth2xwmnmpf7535bfoy3rzu56c7iytq", "get": "https://api.replicate.com/v1/predictions/kj8sypbxqnrm80cm9krrz08tg4", "cancel": "https://api.replicate.com/v1/predictions/kj8sypbxqnrm80cm9krrz08tg4/cancel" }, "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f" }
Prediction
lucataco/hunyuan-heygen-joshua:e82a4154
ID: 3kve7qfawdrme0cm9ks83heydr · Status: Succeeded · Source: Web · Hardware: H100
Input
- crf: 19
- steps: 30
- width: 960
- height: 544
- prompt: HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The man is drinking coffee
- lora_url: ""
- flow_shift: 9
- frame_rate: 15
- num_frames: 33
- force_offload: true
- lora_strength: 0.8
- guidance_scale: 6
- denoise_strength: 1
{
  "crf": 19,
  "steps": 30,
  "width": 960,
  "height": 544,
  "prompt": "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The man is drinking coffee",
  "lora_url": "",
  "flow_shift": 9,
  "frame_rate": 15,
  "num_frames": 33,
  "force_offload": true,
  "lora_strength": 0.8,
  "guidance_scale": 6,
  "denoise_strength": 1
}
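The predictions on this page vary mainly in prompt and `lora_strength` (0.9 vs 0.8). To compare strengths systematically, one input dict per value can be generated from a shared base; a hedged sketch — `sweep_lora_strength` is an illustrative helper, not part of the client:

```python
def sweep_lora_strength(base: dict, strengths) -> list:
    """Return one input dict per LoRA strength, leaving other params untouched."""
    return [{**base, "lora_strength": s} for s in strengths]

base_input = {
    "crf": 19, "steps": 30, "width": 960, "height": 544,
    "prompt": "HGM1 man standing indoors, likely in a modern, well-lit space. "
              "He is wearing a white, long-sleeved shirt. The man is drinking coffee",
    "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 33,
    "force_offload": True, "lora_strength": 0.8, "guidance_scale": 6,
    "denoise_strength": 1,
}

variants = sweep_lora_strength(base_input, [0.7, 0.8, 0.9])
# Each variant can then be passed as `input=` to replicate.run(...).
print([v["lora_strength"] for v in variants])  # [0.7, 0.8, 0.9]
```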
npm install replicate
Set theREPLICATE_API_TOKEN
environment variableexport REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the clientimport Replicate from "replicate"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
  {
    input: {
      crf: 19,
      steps: 30,
      width: 960,
      height: 544,
      prompt: "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The man is drinking coffee",
      lora_url: "",
      flow_shift: 9,
      frame_rate: 15,
      num_frames: 33,
      force_offload: true,
      lora_strength: 0.8,
      guidance_scale: 6,
      denoise_strength: 1
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    input={
        "crf": 19,
        "steps": 30,
        "width": 960,
        "height": 544,
        "prompt": "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The man is drinking coffee",
        "lora_url": "",
        "flow_shift": 9,
        "frame_rate": 15,
        "num_frames": 33,
        "force_offload": True,
        "lora_strength": 0.8,
        "guidance_scale": 6,
        "denoise_strength": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
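The Output JSON further down shows that this model's result is a URL to the generated .mp4 on replicate.delivery. A minimal helper for saving it locally, assuming `replicate.run` returned that URL as a string (newer versions of the Python client may instead return a FileOutput object whose `.url` attribute holds the same link):

```python
import urllib.request

def save_output(url: str, path: str) -> str:
    """Download a prediction's output file (e.g. the generated .mp4) to `path`."""
    urllib.request.urlretrieve(url, path)
    return path

# Usage (assumes `output` is the URL string returned by replicate.run above):
# save_output(output, "HunyuanVideo_00001.mp4")
```

The filename "HunyuanVideo_00001.mp4" matches the one reported in the workflow logs; any local name works.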
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    "input": {
      "crf": 19,
      "steps": 30,
      "width": 960,
      "height": 544,
      "prompt": "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The man is drinking coffee",
      "lora_url": "",
      "flow_shift": 9,
      "frame_rate": 15,
      "num_frames": 33,
      "force_offload": true,
      "lora_strength": 0.9,
      "guidance_scale": 6,
      "denoise_strength": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
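If the `Prefer: wait` header times out before sampling finishes (these runs take roughly two minutes), the create response still includes a `urls.get` endpoint, visible in the Output JSON below, that can be polled until `status` reaches a terminal value. A stdlib-only sketch of that pattern; the polling interval is an arbitrary choice:

```python
import json
import time
import urllib.request

def prediction_get_request(get_url: str, token: str) -> urllib.request.Request:
    # Authorized GET against the prediction's "get" URL from the create response
    return urllib.request.Request(
        get_url, headers={"Authorization": f"Bearer {token}"}
    )

def wait_for_prediction(get_url: str, token: str, interval: float = 2.0) -> dict:
    """Poll the prediction until it succeeds, fails, or is canceled."""
    while True:
        with urllib.request.urlopen(prediction_get_request(get_url, token)) as resp:
            pred = json.load(resp)
        if pred["status"] in ("succeeded", "failed", "canceled"):
            return pred
        time.sleep(interval)
```

On success, `pred["output"]` holds the delivery URL for the generated video.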
Output
{ "completed_at": "2025-01-09T21:24:57.183687Z", "created_at": "2025-01-09T21:22:55.843000Z", "data_removed": false, "error": null, "id": "3kve7qfawdrme0cm9ks83heydr", "input": { "crf": 19, "steps": 30, "width": 960, "height": 544, "prompt": "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The man is drinking coffee", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 33, "force_offload": true, "lora_strength": 0.8, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Random seed set to: 3931337327\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 35\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 36\n[ComfyUI] Input (height, width, video_length) = (544, 960, 33)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 33 frames in 9 latents at 960x544 with 30 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:02<01:02, 2.14s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:05<01:13, 2.62s/it]\n[ComfyUI] 10%|█ | 3/30 [00:08<01:14, 2.77s/it]\n[ComfyUI] 13%|█▎ | 4/30 [00:10<01:13, 2.84s/it]\n[ComfyUI] 17%|█▋ | 5/30 [00:13<01:11, 2.88s/it]\n[ComfyUI] 20%|██ | 6/30 [00:16<01:09, 2.90s/it]\n[ComfyUI] 23%|██▎ | 7/30 [00:19<01:06, 2.91s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:22<01:04, 2.92s/it]\n[ComfyUI] 30%|███ | 9/30 [00:25<01:01, 2.93s/it]\n[ComfyUI] 33%|███▎ | 10/30 [00:28<00:58, 2.93s/it]\n[ComfyUI] 37%|███▋ | 11/30 [00:31<00:55, 2.93s/it]\n[ComfyUI] 40%|████ | 12/30 [00:34<00:52, 
2.94s/it]\n[ComfyUI] 43%|████▎ | 13/30 [00:37<00:49, 2.94s/it]\n[ComfyUI] 47%|████▋ | 14/30 [00:40<00:47, 2.94s/it]\n[ComfyUI] 50%|█████ | 15/30 [00:43<00:44, 2.94s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [00:46<00:41, 2.94s/it]\n[ComfyUI] 57%|█████▋ | 17/30 [00:49<00:38, 2.94s/it]\n[ComfyUI] 60%|██████ | 18/30 [00:52<00:35, 2.94s/it]\n[ComfyUI] 63%|██████▎ | 19/30 [00:55<00:32, 2.94s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [00:58<00:29, 2.94s/it]\n[ComfyUI] 70%|███████ | 21/30 [01:00<00:26, 2.94s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [01:03<00:23, 2.94s/it]\n[ComfyUI] 77%|███████▋ | 23/30 [01:06<00:20, 2.94s/it]\n[ComfyUI] 80%|████████ | 24/30 [01:09<00:17, 2.94s/it]\n[ComfyUI] 83%|████████▎ | 25/30 [01:12<00:14, 2.94s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [01:15<00:11, 2.94s/it]\n[ComfyUI] 90%|█████████ | 27/30 [01:18<00:08, 2.94s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [01:21<00:05, 2.94s/it]\n[ComfyUI] 97%|█████████▋| 29/30 [01:24<00:02, 2.94s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.94s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.91s/it]\n[ComfyUI] Allocated memory: memory=12.299 GB\n[ComfyUI] Max allocated memory: max_memory=15.815 GB\n[ComfyUI] Max reserved memory: max_reserved=17.625 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:00<00:01, 1.00it/s]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:02<00:01, 1.08s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.07it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.03it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 67%|██████▋ | 2/3 [00:00<00:00, 18.25it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 23.35it/s]\n[ComfyUI] Prompt executed in 105.92 seconds\noutputs: {'34': {'gifs': 
[{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 15.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 117.059010205, "total_time": 121.340687 }, "output": "https://replicate.delivery/xezq/smD48Q5XpoJJA9EIZpi5KPX90GLO8g16BjHdzsMqEFVqw3AF/HunyuanVideo_00001.mp4", "started_at": "2025-01-09T21:23:00.124677Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-xhpsx4vcf4gabwnwtqzr3hq3chmx4hshusskz2izhsbtker5fp7q", "get": "https://api.replicate.com/v1/predictions/3kve7qfawdrme0cm9ks83heydr", "cancel": "https://api.replicate.com/v1/predictions/3kve7qfawdrme0cm9ks83heydr/cancel" }, "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f" }
Prediction
lucataco/hunyuan-heygen-joshua:e82a4154IDnmwaw4h0q1rmc0cm9m0ahtfk2wStatusSucceededSourceWebHardwareH100Total durationCreatedInput
- crf: 19
- steps: 30
- width: 960
- height: 544
- prompt: HGM1 man sitting indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. He is reading a book with a serious look on his face
- lora_url: ""
- flow_shift: 9
- frame_rate: 15
- num_frames: 33
- force_offload: true
- lora_strength: 0.9
- guidance_scale: 6
- denoise_strength: 1
{ "crf": 19, "steps": 30, "width": 960, "height": 544, "prompt": "HGM1 man sitting indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. He is reading a book with a serious look on his face", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 33, "force_offload": true, "lora_strength": 0.9, "guidance_scale": 6, "denoise_strength": 1 }
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
  {
    input: {
      crf: 19,
      steps: 30,
      width: 960,
      height: 544,
      prompt: "HGM1 man sitting indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. He is reading a book with a serious look on his face",
      lora_url: "",
      flow_shift: 9,
      frame_rate: 15,
      num_frames: 33,
      force_offload: true,
      lora_strength: 0.9,
      guidance_scale: 6,
      denoise_strength: 1
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    input={
        "crf": 19,
        "steps": 30,
        "width": 960,
        "height": 544,
        "prompt": "HGM1 man sitting indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. He is reading a book with a serious look on his face",
        "lora_url": "",
        "flow_shift": 9,
        "frame_rate": 15,
        "num_frames": 33,
        "force_offload": True,
        "lora_strength": 0.9,
        "guidance_scale": 6,
        "denoise_strength": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    "input": {
      "crf": 19,
      "steps": 30,
      "width": 960,
      "height": 544,
      "prompt": "HGM1 man sitting indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. He is reading a book with a serious look on his face",
      "lora_url": "",
      "flow_shift": 9,
      "frame_rate": 15,
      "num_frames": 33,
      "force_offload": true,
      "lora_strength": 0.9,
      "guidance_scale": 6,
      "denoise_strength": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-09T21:40:40.680967Z", "created_at": "2025-01-09T21:37:21.592000Z", "data_removed": false, "error": null, "id": "nmwaw4h0q1rmc0cm9m0ahtfk2w", "input": { "crf": 19, "steps": 30, "width": 960, "height": 544, "prompt": "HGM1 man sitting indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. He is reading a book with a serious look on his face", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 33, "force_offload": true, "lora_strength": 0.9, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Random seed set to: 542482491\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\n[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:01, 1.62it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.66it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:01<00:00, 1.68it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.41it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.07it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading 
tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 42\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 43\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: lora_comfyui with strength: 0.9\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] Loading 1 new model\n[ComfyUI] loaded completely 0.0 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (544, 960, 33)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 33 frames in 9 latents at 960x544 with 30 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:02<01:05, 2.27s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:05<01:14, 2.67s/it]\n[ComfyUI] 10%|█ | 3/30 [00:08<01:15, 2.79s/it]\n[ComfyUI] 13%|█▎ | 4/30 [00:11<01:14, 2.85s/it]\n[ComfyUI] 17%|█▋ | 5/30 [00:14<01:12, 2.89s/it]\n[ComfyUI] 20%|██ | 6/30 [00:16<01:09, 2.91s/it]\n[ComfyUI] 23%|██▎ | 7/30 [00:19<01:07, 2.92s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:22<01:04, 2.93s/it]\n[ComfyUI] 30%|███ | 9/30 [00:25<01:01, 2.93s/it]\n[ComfyUI] 33%|███▎ | 10/30 [00:28<00:58, 2.94s/it]\n[ComfyUI] 37%|███▋ | 11/30 [00:31<00:55, 2.94s/it]\n[ComfyUI] 40%|████ | 12/30 [00:34<00:52, 2.94s/it]\n[ComfyUI] 43%|████▎ | 13/30 [00:37<00:50, 2.94s/it]\n[ComfyUI] 47%|████▋ | 14/30 [00:40<00:47, 2.94s/it]\n[ComfyUI] 50%|█████ | 15/30 
[00:43<00:44, 2.95s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [00:46<00:41, 2.94s/it]\n[ComfyUI] 57%|█████▋ | 17/30 [00:49<00:38, 2.95s/it]\n[ComfyUI] 60%|██████ | 18/30 [00:52<00:35, 2.95s/it]\n[ComfyUI] 63%|██████▎ | 19/30 [00:55<00:32, 2.95s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [00:58<00:29, 2.95s/it]\n[ComfyUI] 70%|███████ | 21/30 [01:01<00:26, 2.95s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [01:04<00:23, 2.95s/it]\n[ComfyUI] 77%|███████▋ | 23/30 [01:07<00:20, 2.95s/it]\n[ComfyUI] 80%|████████ | 24/30 [01:10<00:17, 2.95s/it]\n[ComfyUI] 83%|████████▎ | 25/30 [01:12<00:14, 2.95s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [01:15<00:11, 2.95s/it]\n[ComfyUI] 90%|█████████ | 27/30 [01:18<00:08, 2.94s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [01:21<00:05, 2.94s/it]\n[ComfyUI] 97%|█████████▋| 29/30 [01:24<00:02, 2.94s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.94s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.92s/it]\n[ComfyUI] Allocated memory: memory=12.758 GB\n[ComfyUI] Max allocated memory: max_memory=16.274 GB\n[ComfyUI] Max reserved memory: max_reserved=17.781 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:01<00:02, 1.03s/it]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:02<00:01, 1.09s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.05it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.02it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 27.22it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 27.17it/s]\n[ComfyUI] Prompt executed in 121.97 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 15.0, 'workflow': 
'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 143.465169224, "total_time": 199.088967 }, "output": "https://replicate.delivery/xezq/KrsSZdZje4WYD65UT2NnavRoadmaYPYUOS6r65xf8nOYRfGoA/HunyuanVideo_00001.mp4", "started_at": "2025-01-09T21:38:17.215798Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-nkbgf3wccff5mcmykis4wytcmjuiwzty4yuxwnsnggfoejoh2qvq", "get": "https://api.replicate.com/v1/predictions/nmwaw4h0q1rmc0cm9m0ahtfk2w", "cancel": "https://api.replicate.com/v1/predictions/nmwaw4h0q1rmc0cm9m0ahtfk2w/cancel" }, "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f" }
Prediction
lucataco/hunyuan-heygen-joshua:e82a4154IDfhmfh7jez1rma0cm9m285a1eb4StatusSucceededSourceWebHardwareH100Total durationCreatedInput
- crf: 19
- steps: 30
- width: 960
- height: 544
- prompt: HGM1 man running outdoors. He is wearing a black puffy coat and running in Central Park New York
- lora_url: ""
- flow_shift: 9
- frame_rate: 15
- num_frames: 33
- force_offload: true
- lora_strength: 0.9
- guidance_scale: 6
- denoise_strength: 1
{ "crf": 19, "steps": 30, "width": 960, "height": 544, "prompt": "HGM1 man running outdoors. He is wearing a black puffy coat and running in Central Park New York", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 33, "force_offload": true, "lora_strength": 0.9, "guidance_scale": 6, "denoise_strength": 1 }
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
  {
    input: {
      crf: 19,
      steps: 30,
      width: 960,
      height: 544,
      prompt: "HGM1 man running outdoors. He is wearing a black puffy coat and running in Central Park New York",
      lora_url: "",
      flow_shift: 9,
      frame_rate: 15,
      num_frames: 33,
      force_offload: true,
      lora_strength: 0.9,
      guidance_scale: 6,
      denoise_strength: 1
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    input={
        "crf": 19,
        "steps": 30,
        "width": 960,
        "height": 544,
        "prompt": "HGM1 man running outdoors. He is wearing a black puffy coat and running in Central Park New York",
        "lora_url": "",
        "flow_shift": 9,
        "frame_rate": 15,
        "num_frames": 33,
        "force_offload": True,
        "lora_strength": 0.9,
        "guidance_scale": 6,
        "denoise_strength": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    "input": {
      "crf": 19,
      "steps": 30,
      "width": 960,
      "height": 544,
      "prompt": "HGM1 man running outdoors. He is wearing a black puffy coat and running in Central Park New York",
      "lora_url": "",
      "flow_shift": 9,
      "frame_rate": 15,
      "num_frames": 33,
      "force_offload": true,
      "lora_strength": 0.9,
      "guidance_scale": 6,
      "denoise_strength": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-09T21:43:47.041747Z", "created_at": "2025-01-09T21:41:55.576000Z", "data_removed": false, "error": null, "id": "fhmfh7jez1rma0cm9m285a1eb4", "input": { "crf": 19, "steps": 30, "width": 960, "height": 544, "prompt": "HGM1 man running outdoors. He is wearing a black puffy coat and running in Central Park New York", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 33, "force_offload": true, "lora_strength": 0.9, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Random seed set to: 379293084\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 23\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 23\n[ComfyUI] Input (height, width, video_length) = (544, 960, 33)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 33 frames in 9 latents at 960x544 with 30 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:02<01:02, 2.14s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:05<01:13, 2.62s/it]\n[ComfyUI] 10%|█ | 3/30 [00:08<01:14, 2.77s/it]\n[ComfyUI] 13%|█▎ | 4/30 [00:10<01:13, 2.84s/it]\n[ComfyUI] 17%|█▋ | 5/30 [00:13<01:11, 2.88s/it]\n[ComfyUI] 20%|██ | 6/30 [00:16<01:09, 2.90s/it]\n[ComfyUI] 23%|██▎ | 7/30 [00:19<01:07, 2.91s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:22<01:04, 2.92s/it]\n[ComfyUI] 30%|███ | 9/30 [00:25<01:01, 2.93s/it]\n[ComfyUI] 33%|███▎ | 10/30 [00:28<00:58, 2.93s/it]\n[ComfyUI] 37%|███▋ | 11/30 [00:31<00:55, 2.94s/it]\n[ComfyUI] 40%|████ | 12/30 [00:34<00:52, 2.94s/it]\n[ComfyUI] 43%|████▎ | 13/30 
[00:37<00:50, 2.94s/it]\n[ComfyUI] 47%|████▋ | 14/30 [00:40<00:47, 2.94s/it]\n[ComfyUI] 50%|█████ | 15/30 [00:43<00:44, 2.94s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [00:46<00:41, 2.94s/it]\n[ComfyUI] 57%|█████▋ | 17/30 [00:49<00:38, 2.95s/it]\n[ComfyUI] 60%|██████ | 18/30 [00:52<00:35, 2.95s/it]\n[ComfyUI] 63%|██████▎ | 19/30 [00:55<00:32, 2.95s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [00:58<00:29, 2.95s/it]\n[ComfyUI] 70%|███████ | 21/30 [01:01<00:26, 2.95s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [01:04<00:23, 2.95s/it]\n[ComfyUI] 77%|███████▋ | 23/30 [01:06<00:20, 2.95s/it]\n[ComfyUI] 80%|████████ | 24/30 [01:09<00:17, 2.95s/it]\n[ComfyUI] 83%|████████▎ | 25/30 [01:12<00:14, 2.95s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [01:15<00:11, 2.95s/it]\n[ComfyUI] 90%|█████████ | 27/30 [01:18<00:08, 2.95s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [01:21<00:05, 2.95s/it]\n[ComfyUI] 97%|█████████▋| 29/30 [01:24<00:02, 2.95s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.95s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.92s/it]\n[ComfyUI] Allocated memory: memory=12.299 GB\n[ComfyUI] Max allocated memory: max_memory=15.815 GB\n[ComfyUI] Max reserved memory: max_reserved=17.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:01<00:02, 1.01s/it]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:02<00:01, 1.09s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.06it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.02it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 67%|██████▋ | 2/3 [00:00<00:00, 18.18it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 23.29it/s]\n[ComfyUI] Prompt executed in 107.86 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 
'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 15.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 111.457088743, "total_time": 111.465747 }, "output": "https://replicate.delivery/xezq/ohU5wvBTF3abHRx8w1JVTwOXRDAGuXNWmv1gf5QreQoTUfGoA/HunyuanVideo_00001.mp4", "started_at": "2025-01-09T21:41:55.584658Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-rj2hfan4xuxaub3daavbfbteprtnw5vqupfml3us4bzi67hagcda", "get": "https://api.replicate.com/v1/predictions/fhmfh7jez1rma0cm9m285a1eb4", "cancel": "https://api.replicate.com/v1/predictions/fhmfh7jez1rma0cm9m285a1eb4/cancel" }, "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f" }
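The response body above is a standard Replicate prediction object. As a minimal sketch, here is how you might pull out the fields you typically care about (status, output URL, timing) from such a dict in Python; the sample values are copied from the output shown above, and `summarize` is an illustrative helper, not part of any Replicate client:

```python
# Minimal sketch: extracting useful fields from a Replicate prediction
# object (a plain JSON dict like the one shown above).
prediction = {
    "status": "succeeded",
    "output": "https://replicate.delivery/xezq/ohU5wvBTF3abHRx8w1JVTwOXRDAGuXNWmv1gf5QreQoTUfGoA/HunyuanVideo_00001.mp4",
    "metrics": {"predict_time": 111.457088743, "total_time": 111.465747},
    "urls": {"get": "https://api.replicate.com/v1/predictions/fhmfh7jez1rma0cm9m285a1eb4"},
}

def summarize(pred: dict) -> dict:
    """Return a compact summary of a prediction response."""
    return {
        "ok": pred["status"] == "succeeded",
        "video_url": pred.get("output"),
        "predict_seconds": round(pred["metrics"]["predict_time"], 1),
    }

summary = summarize(prediction)
print(summary["ok"], summary["predict_seconds"])
```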
Prediction
lucataco/hunyuan-heygen-joshua:e82a4154
ID: z1kfpbygjnrm80cm9m799ykcn0, Status: Succeeded, Source: Web, Hardware: H100
Input:
- crf: 19
- steps: 30
- width: 960
- height: 544
- prompt: HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a classic, tailored suit. The background shows cobblestone streets and old buildings with ornate facades. The man appears to be speaking, as he is looking directly at the camera.
- lora_url: ""
- flow_shift: 9
- frame_rate: 15
- num_frames: 33
- force_offload: true
- lora_strength: 0.8
- guidance_scale: 6
- denoise_strength: 1
{ "crf": 19, "steps": 30, "width": 960, "height": 544, "prompt": "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a classic, tailored suit. The background shows cobblestone streets and old buildings with ornate facades. The man appears to be speaking, as he is looking directly at the camera.", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 33, "force_offload": true, "lora_strength": 0.8, "guidance_scale": 6, "denoise_strength": 1 }
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
  {
    input: {
      crf: 19,
      steps: 30,
      width: 960,
      height: 544,
      prompt: "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a classic, tailored suit. The background shows cobblestone streets and old buildings with ornate facades. The man appears to be speaking, as he is looking directly at the camera.",
      lora_url: "",
      flow_shift: 9,
      frame_rate: 15,
      num_frames: 33,
      force_offload: true,
      lora_strength: 0.8,
      guidance_scale: 6,
      denoise_strength: 1,
    },
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    input={
        "crf": 19,
        "steps": 30,
        "width": 960,
        "height": 544,
        "prompt": "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a classic, tailored suit. The background shows cobblestone streets and old buildings with ornate facades. The man appears to be speaking, as he is looking directly at the camera.",
        "lora_url": "",
        "flow_shift": 9,
        "frame_rate": 15,
        "num_frames": 33,
        "force_offload": True,
        "lora_strength": 0.8,
        "guidance_scale": 6,
        "denoise_strength": 1,
    },
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    "input": {
      "crf": 19,
      "steps": 30,
      "width": 960,
      "height": 544,
      "prompt": "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a classic, tailored suit. The background shows cobblestone streets and old buildings with ornate facades. The man appears to be speaking, as he is looking directly at the camera.",
      "lora_url": "",
      "flow_shift": 9,
      "frame_rate": 15,
      "num_frames": 33,
      "force_offload": true,
      "lora_strength": 0.8,
      "guidance_scale": 6,
      "denoise_strength": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-09T21:55:21.443334Z", "created_at": "2025-01-09T21:53:24.117000Z", "data_removed": false, "error": null, "id": "z1kfpbygjnrm80cm9m799ykcn0", "input": { "crf": 19, "steps": 30, "width": 960, "height": 544, "prompt": "HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a classic, tailored suit. The background shows cobblestone streets and old buildings with ornate facades. The man appears to be speaking, as he is looking directly at the camera.", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 33, "force_offload": true, "lora_strength": 0.8, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Random seed set to: 1467522943\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 59\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 58\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: lora_comfyui with strength: 0.8\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] Loading 1 new model\n[ComfyUI] loaded completely 0.0 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (544, 960, 33)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 33 frames in 9 latents at 960x544 with 30 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), 
('shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:02<01:02, 2.14s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:05<01:13, 2.62s/it]\n[ComfyUI] 10%|█ | 3/30 [00:08<01:14, 2.77s/it]\n[ComfyUI] 13%|█▎ | 4/30 [00:11<01:13, 2.84s/it]\n[ComfyUI] 17%|█▋ | 5/30 [00:13<01:12, 2.89s/it]\n[ComfyUI] 20%|██ | 6/30 [00:16<01:09, 2.91s/it]\n[ComfyUI] 23%|██▎ | 7/30 [00:19<01:07, 2.93s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:22<01:04, 2.94s/it]\n[ComfyUI] 30%|███ | 9/30 [00:25<01:01, 2.94s/it]\n[ComfyUI] 33%|███▎ | 10/30 [00:28<00:58, 2.95s/it]\n[ComfyUI] 37%|███▋ | 11/30 [00:31<00:56, 2.95s/it]\n[ComfyUI] 40%|████ | 12/30 [00:34<00:53, 2.96s/it]\n[ComfyUI] 43%|████▎ | 13/30 [00:37<00:50, 2.96s/it]\n[ComfyUI] 47%|████▋ | 14/30 [00:40<00:47, 2.95s/it]\n[ComfyUI] 50%|█████ | 15/30 [00:43<00:44, 2.95s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [00:46<00:41, 2.95s/it]\n[ComfyUI] 57%|█████▋ | 17/30 [00:49<00:38, 2.95s/it]\n[ComfyUI] 60%|██████ | 18/30 [00:52<00:35, 2.95s/it]\n[ComfyUI] 63%|██████▎ | 19/30 [00:55<00:32, 2.95s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [00:58<00:29, 2.95s/it]\n[ComfyUI] 70%|███████ | 21/30 [01:01<00:26, 2.95s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [01:04<00:23, 2.95s/it]\n[ComfyUI] 77%|███████▋ | 23/30 [01:07<00:20, 2.95s/it]\n[ComfyUI] 80%|████████ | 24/30 [01:10<00:17, 2.95s/it]\n[ComfyUI] 83%|████████▎ | 25/30 [01:13<00:14, 2.95s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [01:15<00:11, 2.95s/it]\n[ComfyUI] 90%|█████████ | 27/30 [01:18<00:08, 2.95s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [01:21<00:05, 2.95s/it]\n[ComfyUI] 97%|█████████▋| 29/30 [01:24<00:02, 2.95s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.95s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.92s/it]\n[ComfyUI] Allocated memory: memory=12.299 GB\n[ComfyUI] Max allocated memory: max_memory=15.815 GB\n[ComfyUI] Max reserved memory: max_reserved=17.281 
GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:01<00:02, 1.01s/it]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:02<00:01, 1.09s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.06it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.03it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 67%|██████▋ | 2/3 [00:00<00:00, 18.18it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 23.22it/s]\n[ComfyUI] Prompt executed in 113.75 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 15.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 117.317706085, "total_time": 117.326334 }, "output": "https://replicate.delivery/xezq/mO0tyy8XGGroNl1N7N0H3pq0XtvAvOsgFfrsjmGeUlMJfeNQB/HunyuanVideo_00001.mp4", "started_at": "2025-01-09T21:53:24.125628Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-w7r536mrdagvjvxubi2qhbo5jdosjin6mlb7lztonjvjnagijeoa", "get": "https://api.replicate.com/v1/predictions/z1kfpbygjnrm80cm9m799ykcn0", "cancel": "https://api.replicate.com/v1/predictions/z1kfpbygjnrm80cm9m799ykcn0/cancel" }, "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f" }
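The logs above report "Sampling 33 frames in 9 latents". As a sketch of where that 9 comes from: assuming the HunyuanVideo VAE's 4x temporal compression (an assumption based on the log line, not stated on this page), the first frame gets its own latent and each later group of four frames shares one. The same arithmetic gives the clip length from `num_frames` and `frame_rate`:

```python
# Why "33 frames in 9 latents": frame/latent arithmetic, assuming a 4x
# temporal compression factor in the HunyuanVideo VAE (an assumption).
def num_latents(num_frames: int, temporal_compression: int = 4) -> int:
    # First frame has its own latent; each later group of
    # `temporal_compression` frames shares one.
    return (num_frames - 1) // temporal_compression + 1

def clip_seconds(num_frames: int, frame_rate: float) -> float:
    return num_frames / frame_rate

print(num_latents(33))       # matches the "9 latents" in the logs
print(clip_seconds(33, 15))  # a 2.2 second clip at 15 fps
```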
Prediction
lucataco/hunyuan-heygen-joshua:e82a4154
ID: gs18pa58fdrma0cm9m99atvweg, Status: Succeeded, Source: Web, Hardware: H100
Input:
- crf: 19
- steps: 30
- width: 960
- height: 544
- prompt: HGM1 man sitting indoors, likely in a modern, well-lit space. He is wearing a blue, long-sleeved shirt. He is at a table, typing on a laptop
- lora_url: ""
- flow_shift: 9
- frame_rate: 15
- num_frames: 33
- force_offload: true
- lora_strength: 0.8
- guidance_scale: 6
- denoise_strength: 1
{ "crf": 19, "steps": 30, "width": 960, "height": 544, "prompt": "HGM1 man sitting indoors, likely in a modern, well-lit space. He is wearing a blue, long-sleeved shirt. He is at a table, typing on a laptop", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 33, "force_offload": true, "lora_strength": 0.8, "guidance_scale": 6, "denoise_strength": 1 }
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
  {
    input: {
      crf: 19,
      steps: 30,
      width: 960,
      height: 544,
      prompt: "HGM1 man sitting indoors, likely in a modern, well-lit space. He is wearing a blue, long-sleeved shirt. He is at a table, typing on a laptop",
      lora_url: "",
      flow_shift: 9,
      frame_rate: 15,
      num_frames: 33,
      force_offload: true,
      lora_strength: 0.8,
      guidance_scale: 6,
      denoise_strength: 1,
    },
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    input={
        "crf": 19,
        "steps": 30,
        "width": 960,
        "height": 544,
        "prompt": "HGM1 man sitting indoors, likely in a modern, well-lit space. He is wearing a blue, long-sleeved shirt. He is at a table, typing on a laptop",
        "lora_url": "",
        "flow_shift": 9,
        "frame_rate": 15,
        "num_frames": 33,
        "force_offload": True,
        "lora_strength": 0.8,
        "guidance_scale": 6,
        "denoise_strength": 1,
    },
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    "input": {
      "crf": 19,
      "steps": 30,
      "width": 960,
      "height": 544,
      "prompt": "HGM1 man sitting indoors, likely in a modern, well-lit space. He is wearing a blue, long-sleeved shirt. He is at a table, typing on a laptop",
      "lora_url": "",
      "flow_shift": 9,
      "frame_rate": 15,
      "num_frames": 33,
      "force_offload": true,
      "lora_strength": 0.8,
      "guidance_scale": 6,
      "denoise_strength": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-09T21:59:27.540053Z", "created_at": "2025-01-09T21:57:35.995000Z", "data_removed": false, "error": null, "id": "gs18pa58fdrma0cm9m99atvweg", "input": { "crf": 19, "steps": 30, "width": 960, "height": 544, "prompt": "HGM1 man sitting indoors, likely in a modern, well-lit space. He is wearing a blue, long-sleeved shirt. He is at a table, typing on a laptop", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 33, "force_offload": true, "lora_strength": 0.8, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Random seed set to: 3293102219\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 40\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 41\n[ComfyUI] Input (height, width, video_length) = (544, 960, 33)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 33 frames in 9 latents at 960x544 with 30 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:02<01:02, 2.14s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:05<01:13, 2.62s/it]\n[ComfyUI] 10%|█ | 3/30 [00:08<01:14, 2.77s/it]\n[ComfyUI] 13%|█▎ | 4/30 [00:10<01:13, 2.84s/it]\n[ComfyUI] 17%|█▋ | 5/30 [00:13<01:12, 2.89s/it]\n[ComfyUI] 20%|██ | 6/30 [00:16<01:09, 2.91s/it]\n[ComfyUI] 23%|██▎ | 7/30 [00:19<01:07, 2.92s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:22<01:04, 2.93s/it]\n[ComfyUI] 30%|███ | 9/30 [00:25<01:01, 2.93s/it]\n[ComfyUI] 33%|███▎ | 10/30 [00:28<00:58, 2.94s/it]\n[ComfyUI] 37%|███▋ | 11/30 [00:31<00:55, 2.94s/it]\n[ComfyUI] 40%|████ | 12/30 [00:34<00:52, 
2.94s/it]\n[ComfyUI] 43%|████▎ | 13/30 [00:37<00:50, 2.94s/it]\n[ComfyUI] 47%|████▋ | 14/30 [00:40<00:47, 2.94s/it]\n[ComfyUI] 50%|█████ | 15/30 [00:43<00:44, 2.94s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [00:46<00:41, 2.94s/it]\n[ComfyUI] 57%|█████▋ | 17/30 [00:49<00:38, 2.95s/it]\n[ComfyUI] 60%|██████ | 18/30 [00:52<00:35, 2.95s/it]\n[ComfyUI] 63%|██████▎ | 19/30 [00:55<00:32, 2.95s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [00:58<00:29, 2.95s/it]\n[ComfyUI] 70%|███████ | 21/30 [01:01<00:26, 2.95s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [01:04<00:23, 2.95s/it]\n[ComfyUI] 77%|███████▋ | 23/30 [01:06<00:20, 2.95s/it]\n[ComfyUI] 80%|████████ | 24/30 [01:09<00:17, 2.94s/it]\n[ComfyUI] 83%|████████▎ | 25/30 [01:12<00:14, 2.94s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [01:15<00:11, 2.94s/it]\n[ComfyUI] 90%|█████████ | 27/30 [01:18<00:08, 2.94s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [01:21<00:05, 2.94s/it]\n[ComfyUI] 97%|█████████▋| 29/30 [01:24<00:02, 2.94s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.94s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.92s/it]\n[ComfyUI] Allocated memory: memory=12.299 GB\n[ComfyUI] Max allocated memory: max_memory=15.815 GB\n[ComfyUI] Max reserved memory: max_reserved=17.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:01<00:02, 1.01s/it]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:02<00:01, 1.10s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.06it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.02it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 67%|██████▋ | 2/3 [00:00<00:00, 18.19it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 23.24it/s]\n[ComfyUI] Prompt executed in 107.78 seconds\noutputs: {'34': {'gifs': 
[{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 15.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 111.538390873, "total_time": 111.545053 }, "output": "https://replicate.delivery/xezq/mnKCXrr1BCb7DxJkvLLaX2KeUyZfnMPmqXE9q96kpksfFfNQB/HunyuanVideo_00001.mp4", "started_at": "2025-01-09T21:57:36.001662Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-yyz6zyuutbsgkkkrm3arbr5quygomz44ujhx4ugh3in7p55u4czq", "get": "https://api.replicate.com/v1/predictions/gs18pa58fdrma0cm9m99atvweg", "cancel": "https://api.replicate.com/v1/predictions/gs18pa58fdrma0cm9m99atvweg/cancel" }, "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f" }
Prediction
lucataco/hunyuan-heygen-joshua:e82a4154
ID: a4kt662a59rma0cm9mk9pk8jw0
Status: Succeeded
Source: Web
Hardware: H100
Input
- crf: 19
- steps: 30
- width: 960
- height: 544
- prompt: HGM1 man standing in a high-tech, futuristic city. He is wearing a sleek, metallic jumpsuit and augmented reality glasses. The background shows towering skyscrapers with neon lights and flying vehicles.
- lora_url: ""
- flow_shift: 9
- frame_rate: 15
- num_frames: 33
- force_offload: true
- lora_strength: 0.9
- guidance_scale: 6
- denoise_strength: 1
{
  "crf": 19,
  "steps": 30,
  "width": 960,
  "height": 544,
  "prompt": "HGM1 man standing in a high-tech, futuristic city. He is wearing a sleek, metallic jumpsuit and augmented reality glasses. The background shows towering skyscrapers with neon lights and flying vehicles.",
  "lora_url": "",
  "flow_shift": 9,
  "frame_rate": 15,
  "num_frames": 33,
  "force_offload": true,
  "lora_strength": 0.9,
  "guidance_scale": 6,
  "denoise_strength": 1
}
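A few useful numbers follow directly from these inputs. The clip length is num_frames divided by frame_rate, and the latent count the sampler reports ("Sampling 33 frames in 9 latents") is consistent with a 4x temporal compression factor; that factor is an assumption inferred from the logs, not a documented parameter, and the helper names below are illustrative only.

```python
# Derived values for the prediction input above. The temporal compression
# factor of 4 is an assumption inferred from the sampler log line
# "Sampling 33 frames in 9 latents" — (33 - 1) // 4 + 1 == 9.
def clip_duration_seconds(num_frames: int, frame_rate: int) -> float:
    """Playback length of the rendered clip in seconds."""
    return num_frames / frame_rate

def latent_frame_count(num_frames: int, temporal_compression: int = 4) -> int:
    """Number of temporal latents the sampler works in."""
    return (num_frames - 1) // temporal_compression + 1

print(clip_duration_seconds(33, 15))  # 2.2 seconds at 15 fps
print(latent_frame_count(33))         # 9, matching the log
```

So the documented defaults produce a 2.2-second clip; raising num_frames lengthens the clip (and the number of latents sampled) proportionally.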
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
  {
    input: {
      crf: 19,
      steps: 30,
      width: 960,
      height: 544,
      prompt: "HGM1 man standing in a high-tech, futuristic city. He is wearing a sleek, metallic jumpsuit and augmented reality glasses. The background shows towering skyscrapers with neon lights and flying vehicles.",
      lora_url: "",
      flow_shift: 9,
      frame_rate: 15,
      num_frames: 33,
      force_offload: true,
      lora_strength: 0.9,
      guidance_scale: 6,
      denoise_strength: 1
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lucataco/hunyuan-heygen-joshua:e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    input={
        "crf": 19,
        "steps": 30,
        "width": 960,
        "height": 544,
        "prompt": "HGM1 man standing in a high-tech, futuristic city. He is wearing a sleek, metallic jumpsuit and augmented reality glasses. The background shows towering skyscrapers with neon lights and flying vehicles.",
        "lora_url": "",
        "flow_shift": 9,
        "frame_rate": 15,
        "num_frames": 33,
        "force_offload": True,
        "lora_strength": 0.9,
        "guidance_scale": 6,
        "denoise_strength": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
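Before spending a prediction, it can help to sanity-check the input dictionary locally. The constraints below (width and height divisible by 16, num_frames of the form 4k + 1) are assumptions inferred from common HunyuanVideo/ComfyUI requirements and from the documented values (960, 544, 33 all satisfy them); they are not taken from the model's published schema, and `validate_input` is a hypothetical helper.

```python
# Hypothetical pre-flight check for the inputs above. The specific constraints
# are assumptions, not the model's published schema — adjust to taste.
def validate_input(inp: dict) -> list[str]:
    """Return a list of human-readable problems; empty means the input looks OK."""
    problems = []
    if inp["width"] % 16 or inp["height"] % 16:
        problems.append("width and height should be multiples of 16")
    if (inp["num_frames"] - 1) % 4:
        problems.append("num_frames should be 4k + 1 (e.g. 33, 65)")
    if not 0.0 <= inp["lora_strength"] <= 2.0:
        problems.append("lora_strength outside the usual 0-2 range")
    return problems

example = {"width": 960, "height": 544, "num_frames": 33, "lora_strength": 0.9}
print(validate_input(example))  # [] — the documented inputs pass
```

A check like this fails fast on a malformed request instead of waiting through queue time for the backend to reject it.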
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run lucataco/hunyuan-heygen-joshua using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f",
    "input": {
      "crf": 19,
      "steps": 30,
      "width": 960,
      "height": 544,
      "prompt": "HGM1 man standing in a high-tech, futuristic city. He is wearing a sleek, metallic jumpsuit and augmented reality glasses. The background shows towering skyscrapers with neon lights and flying vehicles.",
      "lora_url": "",
      "flow_shift": 9,
      "frame_rate": 15,
      "num_frames": 33,
      "force_offload": true,
      "lora_strength": 0.9,
      "guidance_scale": 6,
      "denoise_strength": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-09T22:21:19.366546Z", "created_at": "2025-01-09T22:19:02.570000Z", "data_removed": false, "error": null, "id": "a4kt662a59rma0cm9mk9pk8jw0", "input": { "crf": 19, "steps": 30, "width": 960, "height": 544, "prompt": "HGM1 man standing in a high-tech, futuristic city. He is wearing a sleek, metallic jumpsuit and augmented reality glasses. The background shows towering skyscrapers with neon lights and flying vehicles.", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 33, "force_offload": true, "lora_strength": 0.9, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Random seed set to: 1723331474\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\n[ComfyUI] Input (height, width, video_length) = (544, 960, 33)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 33 frames in 9 latents at 960x544 with 30 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:02<01:01, 2.14s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:05<01:13, 2.61s/it]\n[ComfyUI] 10%|█ | 3/30 [00:08<01:14, 2.76s/it]\n[ComfyUI] 13%|█▎ | 4/30 [00:10<01:13, 2.84s/it]\n[ComfyUI] 17%|█▋ | 5/30 [00:13<01:11, 2.88s/it]\n[ComfyUI] 20%|██ | 6/30 [00:16<01:09, 2.90s/it]\n[ComfyUI] 23%|██▎ | 7/30 [00:19<01:07, 2.91s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:22<01:04, 2.92s/it]\n[ComfyUI] 30%|███ | 9/30 [00:25<01:01, 2.93s/it]\n[ComfyUI] 33%|███▎ | 10/30 [00:28<00:58, 2.94s/it]\n[ComfyUI] 37%|███▋ | 11/30 [00:31<00:55, 2.94s/it]\n[ComfyUI] 40%|████ | 12/30 [00:34<00:52, 2.94s/it]\n[ComfyUI] 43%|████▎ | 13/30 [00:37<00:50, 2.94s/it]\n[ComfyUI] 47%|████▋ | 14/30 [00:40<00:47, 2.94s/it]\n[ComfyUI] 50%|█████ | 15/30 [00:43<00:44, 2.94s/it]\n[ComfyUI] 53%|█████▎ | 
16/30 [00:46<00:41, 2.94s/it]\n[ComfyUI] 57%|█████▋ | 17/30 [00:49<00:38, 2.94s/it]\n[ComfyUI] 60%|██████ | 18/30 [00:52<00:35, 2.94s/it]\n[ComfyUI] 63%|██████▎ | 19/30 [00:55<00:32, 2.95s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [00:58<00:29, 2.95s/it]\n[ComfyUI] 70%|███████ | 21/30 [01:01<00:26, 2.95s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [01:03<00:23, 2.95s/it]\n[ComfyUI] 77%|███████▋ | 23/30 [01:06<00:20, 2.94s/it]\n[ComfyUI] 80%|████████ | 24/30 [01:09<00:17, 2.94s/it]\n[ComfyUI] 83%|████████▎ | 25/30 [01:12<00:14, 2.94s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [01:15<00:11, 2.94s/it]\n[ComfyUI] 90%|█████████ | 27/30 [01:18<00:08, 2.94s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [01:21<00:05, 2.94s/it]\n[ComfyUI] 97%|█████████▋| 29/30 [01:24<00:02, 2.94s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.94s/it]\n[ComfyUI] 100%|██████████| 30/30 [01:27<00:00, 2.92s/it]\n[ComfyUI] Allocated memory: memory=12.299 GB\n[ComfyUI] Max allocated memory: max_memory=15.815 GB\n[ComfyUI] Max reserved memory: max_reserved=17.312 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:01<00:02, 1.01s/it]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:02<00:01, 1.09s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.06it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.02it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 67%|██████▋ | 2/3 [00:00<00:00, 18.13it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 23.19it/s]\n[ComfyUI] Prompt executed in 96.40 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 15.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': 
'/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 105.327011825, "total_time": 136.796546 }, "output": "https://replicate.delivery/xezq/LtSaQxXcxYZhLpZuXvqw4zAlvSdf0CGZHkjPYr9Oo2rv7vBKA/HunyuanVideo_00001.mp4", "started_at": "2025-01-09T22:19:34.039534Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-tpprufuwhmpjr6cb4d2shi57khmlpbs5solh2q3cyyfcsnnyz6oq", "get": "https://api.replicate.com/v1/predictions/a4kt662a59rma0cm9mk9pk8jw0", "cancel": "https://api.replicate.com/v1/predictions/a4kt662a59rma0cm9mk9pk8jw0/cancel" }, "version": "e82a415498ba81104f2c1f41833609e546cd53d2734b37740d685b122c93053f" }
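The prediction response above carries the timing and result fields you usually care about. As a sketch, here is how to pull them out of the JSON; the literal below is abridged to the fields used, with values copied from the response shown.

```python
import json

# Abridged copy of the prediction response above — only the fields used here.
response = json.loads("""{
  "status": "succeeded",
  "metrics": {"predict_time": 105.327011825, "total_time": 136.796546},
  "output": "https://replicate.delivery/xezq/LtSaQxXcxYZhLpZuXvqw4zAlvSdf0CGZHkjPYr9Oo2rv7vBKA/HunyuanVideo_00001.mp4"
}""")

predict = response["metrics"]["predict_time"]
total = response["metrics"]["total_time"]
print(f"status:     {response['status']}")
print(f"queue/boot: {total - predict:.1f}s of {total:.1f}s total")
print(f"video file: {response['output'].rsplit('/', 1)[-1]}")
```

For this run, roughly 31 of the 137 total seconds were spent outside prediction (queueing and setup), which is worth knowing when comparing "Prefer: wait" against polling the `urls.get` endpoint.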