fofr / hunyuan-cyberpunk-mod
Hunyuan fine-tuned on Cyberpunk 2077 photorealistic graphics mods. Use the CYB77 keyword in your prompt to trigger the style.
Prediction
fofr/hunyuan-cyberpunk-mod:6095a5a5a4f81bccbf320e1a68051984c5a3c126495493a6c9656acd7e6d55c8
ID: pvk3t8qvmdrm80cm9ear40zbfm
Status: Succeeded
Source: Web
Hardware: H100
Input
{
  "crf": 19,
  "steps": 50,
  "width": 854,
  "height": 480,
  "prompt": "In the style of CYB77, first person view of a gunfight in a cyberpunk city",
  "lora_url": "",
  "flow_shift": 9,
  "frame_rate": 24,
  "num_frames": 85,
  "force_offload": true,
  "lora_strength": 1,
  "guidance_scale": 6,
  "denoise_strength": 1
}
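A few of these inputs interact in predictable ways. HunyuanVideo's VAE compresses time by a factor of 4, so 85 frames are sampled as 22 latent frames (the logs report "Sampling 85 frames in 22 latents"), and at 24 fps the clip runs about 3.5 seconds. A small sketch of that arithmetic, assuming the standard 4x temporal compression:

```python
def video_stats(num_frames: int, frame_rate: int) -> tuple[int, float]:
    """Latent frame count and clip length for a HunyuanVideo request.

    Assumes the model's 4x temporal VAE compression, which matches the
    "Sampling 85 frames in 22 latents" line in the prediction logs.
    """
    latents = (num_frames - 1) // 4 + 1
    duration_s = num_frames / frame_rate
    return latents, duration_s

latents, duration = video_stats(num_frames=85, frame_rate=24)
print(latents, round(duration, 2))  # 22 latent frames, ~3.54 s clip
```

Bumping `num_frames` lengthens the clip but also the sampling time, since each of the 50 steps processes every latent frame.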
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/hunyuan-cyberpunk-mod using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "fofr/hunyuan-cyberpunk-mod:6095a5a5a4f81bccbf320e1a68051984c5a3c126495493a6c9656acd7e6d55c8",
  {
    input: {
      crf: 19,
      steps: 50,
      width: 854,
      height: 480,
      prompt: "In the style of CYB77, first person view of a gunfight in a cyberpunk city",
      lora_url: "",
      flow_shift: 9,
      frame_rate: 24,
      num_frames: 85,
      force_offload: true,
      lora_strength: 1,
      guidance_scale: 6,
      denoise_strength: 1
    }
  }
);

// To access the file URL:
console.log(output.url());

// To write the file to disk (this model outputs an MP4 video):
await fs.promises.writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run fofr/hunyuan-cyberpunk-mod using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/hunyuan-cyberpunk-mod:6095a5a5a4f81bccbf320e1a68051984c5a3c126495493a6c9656acd7e6d55c8",
    input={
        "crf": 19,
        "steps": 50,
        "width": 854,
        "height": 480,
        "prompt": "In the style of CYB77, first person view of a gunfight in a cyberpunk city",
        "lora_url": "",
        "flow_shift": 9,
        "frame_rate": 24,
        "num_frames": 85,
        "force_offload": True,
        "lora_strength": 1,
        "guidance_scale": 6,
        "denoise_strength": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run fofr/hunyuan-cyberpunk-mod using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "fofr/hunyuan-cyberpunk-mod:6095a5a5a4f81bccbf320e1a68051984c5a3c126495493a6c9656acd7e6d55c8",
    "input": {
      "crf": 19,
      "steps": 50,
      "width": 854,
      "height": 480,
      "prompt": "In the style of CYB77, first person view of a gunfight in a cyberpunk city",
      "lora_url": "",
      "flow_shift": 9,
      "frame_rate": 24,
      "num_frames": 85,
      "force_offload": true,
      "lora_strength": 1,
      "guidance_scale": 6,
      "denoise_strength": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
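The `Prefer: wait` header asks the API to hold the connection until the prediction finishes, but long video jobs (this one took over eight minutes of predict time) will usually outlast the wait window, so in practice you poll the `urls.get` endpoint from the creation response until `status` reaches a terminal value. A hedged sketch of that loop, with the HTTP call passed in as a callable so the control flow can be shown without a live API token:

```python
import time

TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}

def poll_prediction(fetch, get_url: str, interval: float = 0.0) -> dict:
    """Poll a prediction's `get` URL until it reaches a terminal status.

    `fetch` maps a URL to the decoded JSON body, e.g.
    `lambda u: requests.get(u, headers=auth_headers).json()`;
    it is a parameter here so the loop is testable offline.
    """
    while True:
        prediction = fetch(get_url)
        if prediction["status"] in TERMINAL_STATUSES:
            return prediction
        time.sleep(interval)

# Simulated status sequence standing in for real API responses:
responses = iter([
    {"status": "starting"},
    {"status": "processing"},
    {"status": "succeeded", "output": "https://replicate.delivery/.../HunyuanVideo_00001.mp4"},
])
final = poll_prediction(lambda url: next(responses), "https://api.replicate.com/v1/predictions/...")
print(final["status"])  # succeeded
```

In real code you would also add a timeout and back off between requests rather than polling in a tight loop.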
Output
{ "completed_at": "2025-01-09T15:11:21.671066Z", "created_at": "2025-01-09T15:01:48.067000Z", "data_removed": false, "error": null, "id": "pvk3t8qvmdrm80cm9ear40zbfm", "input": { "crf": 19, "steps": 50, "width": 854, "height": 480, "prompt": "In the style of CYB77, first person view of a gunfight in a cyberpunk city", "lora_url": "", "flow_shift": 9, "frame_rate": 24, "num_frames": 85, "force_offload": true, "lora_strength": 1, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Random seed set to: 2729311902\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:01, 1.69it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.74it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:01<00:00, 1.77it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.22it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: 
HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 21\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 22\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: lora_comfyui with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] Loading 1 new model\n[ComfyUI] loaded completely 0.0 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (480, 854, 85)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 85 frames in 22 latents at 864x480 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:07<05:46, 7.06s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:16<06:45, 8.44s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:25<06:57, 8.88s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:35<06:58, 9.09s/it]\n[ComfyUI] 10%|█ | 5/50 [00:44<06:54, 9.21s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:54<06:48, 9.28s/it]\n[ComfyUI] 14%|█▍ | 7/50 [01:03<06:40, 9.32s/it]\n[ComfyUI] 16%|█▌ | 8/50 [01:12<06:32, 9.34s/it]\n[ComfyUI] 18%|█▊ | 9/50 [01:22<06:23, 9.36s/it]\n[ComfyUI] 20%|██ | 10/50 [01:31<06:14, 9.37s/it]\n[ComfyUI] 22%|██▏ | 11/50 [01:41<06:05, 9.38s/it]\n[ComfyUI] 24%|██▍ | 12/50 [01:50<05:56, 9.38s/it]\n[ComfyUI] 26%|██▌ | 13/50 [01:59<05:47, 9.39s/it]\n[ComfyUI] 28%|██▊ | 14/50 [02:09<05:38, 9.39s/it]\n[ComfyUI] 30%|███ | 15/50 [02:18<05:28, 9.39s/it]\n[ComfyUI] 32%|███▏ | 16/50 [02:28<05:19, 9.40s/it]\n[ComfyUI] 34%|███▍ | 17/50 [02:37<05:10, 9.40s/it]\n[ComfyUI] 36%|███▌ | 18/50 
[02:46<05:00, 9.40s/it]\n[ComfyUI] 38%|███▊ | 19/50 [02:56<04:51, 9.39s/it]\n[ComfyUI] 40%|████ | 20/50 [03:05<04:41, 9.39s/it]\n[ComfyUI] 42%|████▏ | 21/50 [03:15<04:32, 9.39s/it]\n[ComfyUI] 44%|████▍ | 22/50 [03:24<04:23, 9.39s/it]\n[ComfyUI] 46%|████▌ | 23/50 [03:33<04:13, 9.39s/it]\n[ComfyUI] 48%|████▊ | 24/50 [03:43<04:04, 9.39s/it]\n[ComfyUI] 50%|█████ | 25/50 [03:52<03:54, 9.39s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [04:02<03:45, 9.39s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [04:11<03:35, 9.39s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [04:20<03:26, 9.39s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [04:30<03:17, 9.39s/it]\n[ComfyUI] 60%|██████ | 30/50 [04:39<03:07, 9.39s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [04:48<02:58, 9.39s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [04:58<02:49, 9.39s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [05:07<02:39, 9.39s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [05:17<02:30, 9.39s/it]\n[ComfyUI] 70%|███████ | 35/50 [05:26<02:20, 9.39s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [05:35<02:11, 9.39s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [05:45<02:02, 9.39s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [05:54<01:52, 9.39s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [06:04<01:43, 9.39s/it]\n[ComfyUI] 80%|████████ | 40/50 [06:13<01:33, 9.39s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [06:22<01:24, 9.39s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [06:32<01:15, 9.39s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [06:41<01:05, 9.39s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [06:51<00:56, 9.39s/it]\n[ComfyUI] 90%|█████████ | 45/50 [07:00<00:46, 9.39s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [07:09<00:37, 9.39s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [07:19<00:28, 9.39s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [07:28<00:18, 9.39s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [07:37<00:09, 9.39s/it]\n[ComfyUI] 100%|██████████| 50/50 [07:47<00:00, 9.39s/it]\n[ComfyUI] 100%|██████████| 50/50 [07:47<00:00, 9.35s/it]\n[ComfyUI] Allocated memory: memory=12.762 GB\n[ComfyUI] Max allocated memory: max_memory=22.439 GB\n[ComfyUI] Max reserved memory: max_reserved=26.000 
GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:01<00:03, 1.97s/it]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:04<00:02, 2.04s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:04<00:00, 1.50s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:04<00:00, 1.64s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 36.46it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:01<00:02, 1.12s/it]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:02<00:01, 1.15s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.17it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.08it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 50.34it/s]\n[ComfyUI] Prompt executed in 507.93 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 24.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 512.155463706, "total_time": 573.604066 }, "output": "https://replicate.delivery/xezq/n6DhktRQrALkCJy2zA8Pk0DvMnud9OKjvZbjNtdtLZRGZ2AF/HunyuanVideo_00001.mp4", "started_at": "2025-01-09T15:02:49.515603Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-ycn5oaf54scflbxsmq57dxaju5sroixgnlzkhwmuadgfd6pmuj7a", "get": "https://api.replicate.com/v1/predictions/pvk3t8qvmdrm80cm9ear40zbfm", "cancel": 
"https://api.replicate.com/v1/predictions/pvk3t8qvmdrm80cm9ear40zbfm/cancel" }, "version": "6095a5a5a4f81bccbf320e1a68051984c5a3c126495493a6c9656acd7e6d55c8" }
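The prediction JSON above bundles the output URL with timing metrics (`predict_time` is GPU time; `total_time` also includes queueing and setup). A small hedged helper for pulling out the fields you usually care about, shown against a stub dict mirroring the response shape:

```python
def summarize(prediction: dict) -> dict:
    """Extract the commonly used fields from a Replicate prediction response."""
    metrics = prediction.get("metrics", {})
    predict_time = metrics.get("predict_time", 0) or 0
    total_time = metrics.get("total_time", 0) or 0
    return {
        "id": prediction["id"],
        "status": prediction["status"],
        "output_url": prediction.get("output"),
        "predict_time_s": predict_time,
        # Rough queue + setup overhead: everything that wasn't GPU time.
        "overhead_s": total_time - predict_time,
    }

# Stub with the same shape (and metrics) as the response above:
pred = {
    "id": "pvk3t8qvmdrm80cm9ear40zbfm",
    "status": "succeeded",
    "output": "https://replicate.delivery/xezq/stub-path/HunyuanVideo_00001.mp4",
    "metrics": {"predict_time": 512.155463706, "total_time": 573.604066},
}
info = summarize(pred)
print(info["status"], round(info["overhead_s"], 1))  # succeeded 61.4
```

The `urls.get` and `urls.cancel` fields in the same response are what you would hand to a polling loop or a cancellation request.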
Prediction
fofr/hunyuan-cyberpunk-mod:6095a5a5a4f81bccbf320e1a68051984c5a3c126495493a6c9656acd7e6d55c8
ID: ff83aynt41rm80cm9ear0nnk3w
Status: Succeeded
Source: Web
Hardware: H100
Input
{
  "crf": 19,
  "steps": 50,
  "width": 854,
  "height": 480,
  "prompt": "In the style of CYB77, riding a motorbike in first person view through a cyberpunk city at night",
  "lora_url": "",
  "flow_shift": 9,
  "frame_rate": 24,
  "num_frames": 85,
  "force_offload": true,
  "lora_strength": 1,
  "guidance_scale": 6,
  "denoise_strength": 1
}
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/hunyuan-cyberpunk-mod using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "fofr/hunyuan-cyberpunk-mod:6095a5a5a4f81bccbf320e1a68051984c5a3c126495493a6c9656acd7e6d55c8",
  {
    input: {
      crf: 19,
      steps: 50,
      width: 854,
      height: 480,
      prompt: "In the style of CYB77, riding a motorbike in first person view through a cyberpunk city at night",
      lora_url: "",
      flow_shift: 9,
      frame_rate: 24,
      num_frames: 85,
      force_offload: true,
      lora_strength: 1,
      guidance_scale: 6,
      denoise_strength: 1
    }
  }
);

// To access the file URL:
console.log(output.url());

// To write the file to disk (this model outputs an MP4 video):
await fs.promises.writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run fofr/hunyuan-cyberpunk-mod using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/hunyuan-cyberpunk-mod:6095a5a5a4f81bccbf320e1a68051984c5a3c126495493a6c9656acd7e6d55c8",
    input={
        "crf": 19,
        "steps": 50,
        "width": 854,
        "height": 480,
        "prompt": "In the style of CYB77, riding a motorbike in first person view through a cyberpunk city at night",
        "lora_url": "",
        "flow_shift": 9,
        "frame_rate": 24,
        "num_frames": 85,
        "force_offload": True,
        "lora_strength": 1,
        "guidance_scale": 6,
        "denoise_strength": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run fofr/hunyuan-cyberpunk-mod using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "fofr/hunyuan-cyberpunk-mod:6095a5a5a4f81bccbf320e1a68051984c5a3c126495493a6c9656acd7e6d55c8",
    "input": {
      "crf": 19,
      "steps": 50,
      "width": 854,
      "height": 480,
      "prompt": "In the style of CYB77, riding a motorbike in first person view through a cyberpunk city at night",
      "lora_url": "",
      "flow_shift": 9,
      "frame_rate": 24,
      "num_frames": 85,
      "force_offload": true,
      "lora_strength": 1,
      "guidance_scale": 6,
      "denoise_strength": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-09T15:11:22.605411Z", "created_at": "2025-01-09T15:01:31.296000Z", "data_removed": false, "error": null, "id": "ff83aynt41rm80cm9ear0nnk3w", "input": { "crf": 19, "steps": 50, "width": 854, "height": 480, "prompt": "In the style of CYB77, riding a motorbike in first person view through a cyberpunk city at night", "lora_url": "", "flow_shift": 9, "frame_rate": 24, "num_frames": 85, "force_offload": true, "lora_strength": 1, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Random seed set to: 260703778\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\n[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:01, 1.76it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.73it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:01<00:00, 1.75it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.55it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.18it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (llm) from: 
/src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 24\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 24\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: lora_comfyui with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] Loading 1 new model\n[ComfyUI] loaded completely 0.0 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (480, 854, 85)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 85 frames in 22 latents at 864x480 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:07<05:45, 7.06s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:16<06:45, 8.44s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:25<06:57, 8.89s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:35<06:58, 9.10s/it]\n[ComfyUI] 10%|█ | 5/50 [00:44<06:54, 9.21s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:54<06:48, 9.28s/it]\n[ComfyUI] 14%|█▍ | 7/50 [01:03<06:41, 9.33s/it]\n[ComfyUI] 16%|█▌ | 8/50 [01:12<06:33, 9.36s/it]\n[ComfyUI] 18%|█▊ | 9/50 [01:22<06:24, 9.38s/it]\n[ComfyUI] 20%|██ | 10/50 [01:31<06:15, 9.39s/it]\n[ComfyUI] 22%|██▏ | 11/50 [01:41<06:06, 9.40s/it]\n[ComfyUI] 24%|██▍ | 12/50 [01:50<05:57, 9.41s/it]\n[ComfyUI] 26%|██▌ | 13/50 [02:00<05:48, 9.41s/it]\n[ComfyUI] 28%|██▊ | 14/50 [02:09<05:38, 9.41s/it]\n[ComfyUI] 30%|███ | 15/50 [02:18<05:29, 9.42s/it]\n[ComfyUI] 
32%|███▏ | 16/50 [02:28<05:20, 9.42s/it]\n[ComfyUI] 34%|███▍ | 17/50 [02:37<05:10, 9.42s/it]\n[ComfyUI] 36%|███▌ | 18/50 [02:47<05:01, 9.42s/it]\n[ComfyUI] 38%|███▊ | 19/50 [02:56<04:51, 9.42s/it]\n[ComfyUI] 40%|████ | 20/50 [03:06<04:42, 9.42s/it]\n[ComfyUI] 42%|████▏ | 21/50 [03:15<04:33, 9.42s/it]\n[ComfyUI] 44%|████▍ | 22/50 [03:24<04:23, 9.42s/it]\n[ComfyUI] 46%|████▌ | 23/50 [03:34<04:14, 9.42s/it]\n[ComfyUI] 48%|████▊ | 24/50 [03:43<04:04, 9.42s/it]\n[ComfyUI] 50%|█████ | 25/50 [03:53<03:55, 9.42s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [04:02<03:46, 9.42s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [04:11<03:36, 9.42s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [04:21<03:27, 9.42s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [04:30<03:17, 9.42s/it]\n[ComfyUI] 60%|██████ | 30/50 [04:40<03:08, 9.42s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [04:49<02:58, 9.42s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [04:59<02:49, 9.42s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [05:08<02:40, 9.42s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [05:17<02:30, 9.42s/it]\n[ComfyUI] 70%|███████ | 35/50 [05:27<02:21, 9.42s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [05:36<02:11, 9.42s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [05:46<02:02, 9.42s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [05:55<01:53, 9.42s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [06:05<01:43, 9.42s/it]\n[ComfyUI] 80%|████████ | 40/50 [06:14<01:34, 9.42s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [06:23<01:24, 9.42s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [06:33<01:15, 9.42s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [06:42<01:05, 9.42s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [06:52<00:56, 9.42s/it]\n[ComfyUI] 90%|█████████ | 45/50 [07:01<00:47, 9.42s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [07:10<00:37, 9.42s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [07:20<00:28, 9.42s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [07:29<00:18, 9.42s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [07:39<00:09, 9.42s/it]\n[ComfyUI] 100%|██████████| 50/50 [07:48<00:00, 9.42s/it]\n[ComfyUI] 100%|██████████| 50/50 [07:48<00:00, 9.37s/it]\n[ComfyUI] Allocated memory: 
memory=12.762 GB\n[ComfyUI] Max allocated memory: max_memory=22.439 GB\n[ComfyUI] Max reserved memory: max_reserved=26.000 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:01<00:03, 2.00s/it]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:04<00:02, 2.06s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:04<00:00, 1.52s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:04<00:00, 1.66s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 36.19it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:01<00:02, 1.13s/it]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:02<00:01, 1.17s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.16it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.06it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 49.96it/s]\n[ComfyUI] Prompt executed in 512.25 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 24.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 530.148973459, "total_time": 591.309411 }, "output": "https://replicate.delivery/xezq/kIHaNXieEmwiOayJjE93dTNj4t89bvUn0yvVfeGuRFb0IzGoA/HunyuanVideo_00001.mp4", "started_at": "2025-01-09T15:02:32.456437Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-ub6tyjdnpqpj3ar3ocs7m7xcwgr4mx2ibouiohyihlrjs22cqdgq", "get": 
"https://api.replicate.com/v1/predictions/ff83aynt41rm80cm9ear0nnk3w", "cancel": "https://api.replicate.com/v1/predictions/ff83aynt41rm80cm9ear0nnk3w/cancel" }, "version": "6095a5a5a4f81bccbf320e1a68051984c5a3c126495493a6c9656acd7e6d55c8" }
[00:00<00:00, 36.19it/s] [ComfyUI] [ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s] [ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:01<00:02, 1.13s/it] [ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:02<00:01, 1.17s/it] [ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.16it/s] [ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.06it/s] [ComfyUI] [ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s] Executing node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine [ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 49.96it/s] [ComfyUI] Prompt executed in 512.25 seconds outputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 24.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}} ==================================== HunyuanVideo_00001.png HunyuanVideo_00001.mp4
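Two numbers in the sampler log differ from the request inputs: 85 requested frames become "22 latents", and the requested width of 854 becomes 864. A plausible reading (an assumption about this model, not stated in the log) is that HunyuanVideo's video VAE compresses time 4x with one extra latent for the first frame, and that spatial dimensions are rounded up to a multiple of 16. Both readings reproduce the logged values:

```python
def temporal_latents(num_frames: int, compression: int = 4) -> int:
    """Latent frame count, assuming causal 4x temporal compression:
    (frames - 1) // compression + 1. This matches the log but is an
    assumption about the model's VAE, not documented on this page."""
    return (num_frames - 1) // compression + 1


def round_up(value: int, multiple: int = 16) -> int:
    """Round a spatial dimension up to the next multiple (assumed 16)."""
    return -(-value // multiple) * multiple


print(temporal_latents(85))  # -> 22, matching "Sampling 85 frames in 22 latents"
print(round_up(854))         # -> 864, matching "at 864x480"
```

The 50-step loop at ~9.42 s/it accounts for roughly 471 s of the 512.25 s total; the remainder is model loading, VAE decode, and video encoding.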
Want to make some of these yourself?
Run this model
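The prediction above can also be reproduced with Replicate's Python client. The sketch below mirrors the exact inputs shown on this page; it assumes `pip install replicate` and a `REPLICATE_API_TOKEN` in the environment, and only calls the API when that token is present. Writing the result via `output.read()` assumes a recent client version that returns a file-like output object.

```python
import os

# Inputs copied from the prediction shown above.
prediction_input = {
    "crf": 19,
    "steps": 50,
    "width": 854,
    "height": 480,
    "prompt": "In the style of CYB77, first person view of a gunfight in a cyberpunk city",
    "lora_url": "",
    "flow_shift": 9,
    "frame_rate": 24,
    "num_frames": 85,
    "force_offload": True,
    "lora_strength": 1,
    "guidance_scale": 6,
    "denoise_strength": 1,
}


def run_prediction():
    """Run the model on Replicate. Requires the `replicate` package
    and REPLICATE_API_TOKEN; the version hash is the one on this page."""
    import replicate

    return replicate.run(
        "fofr/hunyuan-cyberpunk-mod:6095a5a5a4f81bccbf320e1a68051984c5a3c126495493a6c9656acd7e6d55c8",
        input=prediction_input,
    )


if os.environ.get("REPLICATE_API_TOKEN"):
    output = run_prediction()
    with open("output.mp4", "wb") as f:
        f.write(output.read())
```

Expect runtimes in line with the log above: roughly eight and a half minutes on an H100 for 85 frames at 50 steps.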