deepfates/hunyuan-arcane
A Hunyuan-Video model fine-tuned on Arcane (2021). The trigger word is "RCN". For best results, begin your prompt with "A video in the style of RCN, RCN".
- Public
- 352 runs
- H100
- Fine-tune
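Because the trigger phrase has to sit at the very start of the prompt, it can help to build prompts programmatically. A minimal sketch in Python; the helper name and the scene text are illustrative, not part of the model:

# Hypothetical helper: prepend the Arcane trigger phrase to any scene description.
TRIGGER = "A video in the style of RCN, RCN"

def arcane_prompt(scene: str) -> str:
    return f"{TRIGGER} {scene}"

print(arcane_prompt("The video clip features a large owl perched on a metal stand."))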
Prediction
deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad
ID: 99pekzqgpnrme0cmk68tvqkpsg
Status: Succeeded
Source: API
Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of RCN, RCN The video clip features a large owl perched on a metal stand. The owl is positioned in the foreground, with its body facing slightly to the left. The owl's feathers are predominantly dark brown with lighter brown and white streaks, giving it a mottled appearance. Its eyes are large and round, with a piercing gaze that seems to be directed towards something off-camera. The background is dark and somewhat blurred, suggesting a nighttime setting. The owl is set against a backdrop of a concrete wall or structure, which appears to be part of a larger, possibly industrial or urban environment. The wall has a rough texture and is illuminated by
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip features a large owl perched on a metal stand. The owl is positioned in the foreground, with its body facing slightly to the left. The owl's feathers are predominantly dark brown with lighter brown and white streaks, giving it a mottled appearance. Its eyes are large and round, with a piercing gaze that seems to be directed towards something off-camera.\nThe background is dark and somewhat blurred, suggesting a nighttime setting. The owl is set against a backdrop of a concrete wall or structure, which appears to be part of a larger, possibly industrial or urban environment. The wall has a rough texture and is illuminated by", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of RCN, RCN The video clip features a large owl perched on a metal stand. The owl is positioned in the foreground, with its body facing slightly to the left. The owl's feathers are predominantly dark brown with lighter brown and white streaks, giving it a mottled appearance. Its eyes are large and round, with a piercing gaze that seems to be directed towards something off-camera.\nThe background is dark and somewhat blurred, suggesting a nighttime setting. The owl is set against a backdrop of a concrete wall or structure, which appears to be part of a larger, possibly industrial or urban environment. The wall has a rough texture and is illuminated by",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url());
//=> "http://example.com"

// To write the generated video to disk:
await writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of RCN, RCN The video clip features a large owl perched on a metal stand. The owl is positioned in the foreground, with its body facing slightly to the left. The owl's feathers are predominantly dark brown with lighter brown and white streaks, giving it a mottled appearance. Its eyes are large and round, with a piercing gaze that seems to be directed towards something off-camera.\nThe background is dark and somewhat blurred, suggesting a nighttime setting. The owl is set against a backdrop of a concrete wall or structure, which appears to be part of a larger, possibly industrial or urban environment. The wall has a rough texture and is illuminated by",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1,
        "guidance_scale": 6,
    },
)
print(output)
To learn more, take a look at the guide on getting started with Python.
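The snippet above only prints the output. If you want to save the generated video locally, something like the following sketch should work; it assumes the output is either a URL string or a file-like object with a read() method, depending on your client version, and the filename is arbitrary:

import urllib.request

# `output` comes from the replicate.run() call above.
# Newer client versions return a file-like object; older ones return a URL string.
if hasattr(output, "read"):
    with open("hunyuan_arcane.mp4", "wb") as f:
        f.write(output.read())
else:
    urllib.request.urlretrieve(output, "hunyuan_arcane.mp4")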
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
    "input": {
      "seed": 12345,
      "steps": 50,
      "width": 640,
      "height": 360,
      "prompt": "A video in the style of RCN, RCN The video clip features a large owl perched on a metal stand. The owl is positioned in the foreground, with its body facing slightly to the left. The owl\'s feathers are predominantly dark brown with lighter brown and white streaks, giving it a mottled appearance. Its eyes are large and round, with a piercing gaze that seems to be directed towards something off-camera.\\nThe background is dark and somewhat blurred, suggesting a nighttime setting. The owl is set against a backdrop of a concrete wall or structure, which appears to be part of a larger, possibly industrial or urban environment. The wall has a rough texture and is illuminated by",
      "frame_rate": 16,
      "num_frames": 66,
      "lora_strength": 1,
      "guidance_scale": 6
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
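The Prefer: wait header asks Replicate to hold the connection open until the prediction finishes, but a long video prediction can exceed that window, in which case the response comes back before the output is ready. The response JSON includes a urls.get endpoint you can poll. A rough sketch of that in Python with plain urllib (the prediction ID here is the one from the example below; substitute the id returned by your own POST):

import json
import os
import urllib.request

prediction_id = "99pekzqgpnrme0cmk68tvqkpsg"  # use the "id" field from your POST response

req = urllib.request.Request(
    f"https://api.replicate.com/v1/predictions/{prediction_id}",
    headers={"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"},
)
with urllib.request.urlopen(req) as resp:
    prediction = json.load(resp)

print(prediction["status"], prediction.get("output"))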
Output
{ "completed_at": "2025-01-24T18:34:14.937140Z", "created_at": "2025-01-24T18:27:45.973000Z", "data_removed": false, "error": null, "id": "99pekzqgpnrme0cmk68tvqkpsg", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip features a large owl perched on a metal stand. The owl is positioned in the foreground, with its body facing slightly to the left. The owl's feathers are predominantly dark brown with lighter brown and white streaks, giving it a mottled appearance. Its eyes are large and round, with a piercing gaze that seems to be directed towards something off-camera.\nThe background is dark and somewhat blurred, suggesting a nighttime setting. The owl is set against a backdrop of a concrete wall or structure, which appears to be part of a larger, possibly industrial or urban environment. The wall has a rough texture and is illuminated by", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_77ccfd46-11dd-4414-8eae-abd3a5fa0718.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_77ccfd46-11dd-4414-8eae-abd3a5fa0718.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\nExecuting node 42, title: HunyuanVideo Enhance A Video, class type: HyVideoEnhanceAVideo\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:02, 1.11it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.04it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:02<00:00, 1.07it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 1.65it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 1.38it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 140\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\nExecuting node 41, title: HunyuanVideo Lora Select, class type: 
HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_77ccfd46-11dd-4414-8eae-abd3a5fa0718 with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:58, 2.42s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:39, 2.06s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:41, 2.17s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:42, 2.22s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:41, 2.25s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:18<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:17, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.29s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.29s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.29s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.29s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.29s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:57<00:57, 2.29s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.29s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.29s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.29s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.29s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.29s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.29s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 
2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.760 GB\n[ComfyUI] Max allocated memory: max_memory=15.559 GB\n[ComfyUI] Max reserved memory: max_reserved=16.875 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.50s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.25s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.29s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 28.46it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.48it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.99it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.90it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 83.94it/s]\n[ComfyUI] Prompt executed in 149.88 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 151.778480336, "total_time": 388.96414 }, "output": "https://replicate.delivery/xezq/AUeKubR9yfuAHEgHGQjT8dvMzevVaYSWL2ZOhofkFu3byjhQB/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T18:31:43.158660Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-2z4x2xz7uw22ockac5wwolwnrtqwwpmdddgrgewo5s3fzaojpzca", "get": "https://api.replicate.com/v1/predictions/99pekzqgpnrme0cmk68tvqkpsg", "cancel": "https://api.replicate.com/v1/predictions/99pekzqgpnrme0cmk68tvqkpsg/cancel" }, "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad" }
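Note the warnings at the top of the logs: the requested 640x360 at 66 frames was adjusted to 640x368 at 65 frames. The exact rule isn't documented on this page, but the logged values are consistent with spatial dimensions rounded up to a multiple of 16 and frame counts snapped to 4n + 1. A small sketch of that assumption, useful for choosing inputs that won't be adjusted under you:

import math

def snap_inputs(width: int, height: int, num_frames: int) -> tuple[int, int, int]:
    # Assumption inferred from the logged adjustments (640x360 -> 640x368, 66 -> 65):
    # dimensions rounded up to a multiple of 16, frame count snapped to 4n + 1.
    w = math.ceil(width / 16) * 16
    h = math.ceil(height / 16) * 16
    frames = ((num_frames - 1) // 4) * 4 + 1
    return w, h, frames

print(snap_inputs(640, 360, 66))  # (640, 368, 65), matching the log output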
Prediction
deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad
ID: ew02qvphj5rma0cmk6hr3q60w4
Status: Succeeded
Source: API
Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of RCN, RCN The video clip depicts a dimly lit jazz club with a cozy, intimate atmosphere. The stage is set with a piano and a saxophone player, both engrossed in their performance. The saxophonist is positioned in the center, playing with a focused expression, while the pianist sits at the back, immersed in his music. The club is filled with patrons seated at tables, engaged in conversation and enjoying the live music. The audience appears to be a mix of men and women, dressed in casual to semi-formal attire. The lighting is warm and subdued, with a neon sign on the right side of the stage reading
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts a dimly lit jazz club with a cozy, intimate atmosphere. The stage is set with a piano and a saxophone player, both engrossed in their performance. The saxophonist is positioned in the center, playing with a focused expression, while the pianist sits at the back, immersed in his music. The club is filled with patrons seated at tables, engaged in conversation and enjoying the live music. The audience appears to be a mix of men and women, dressed in casual to semi-formal attire. The lighting is warm and subdued, with a neon sign on the right side of the stage reading ", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of RCN, RCN The video clip depicts a dimly lit jazz club with a cozy, intimate atmosphere. The stage is set with a piano and a saxophone player, both engrossed in their performance. The saxophonist is positioned in the center, playing with a focused expression, while the pianist sits at the back, immersed in his music. The club is filled with patrons seated at tables, engaged in conversation and enjoying the live music. The audience appears to be a mix of men and women, dressed in casual to semi-formal attire. The lighting is warm and subdued, with a neon sign on the right side of the stage reading ",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url());
//=> "http://example.com"

// To write the generated video to disk:
await writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of RCN, RCN The video clip depicts a dimly lit jazz club with a cozy, intimate atmosphere. The stage is set with a piano and a saxophone player, both engrossed in their performance. The saxophonist is positioned in the center, playing with a focused expression, while the pianist sits at the back, immersed in his music. The club is filled with patrons seated at tables, engaged in conversation and enjoying the live music. The audience appears to be a mix of men and women, dressed in casual to semi-formal attire. The lighting is warm and subdued, with a neon sign on the right side of the stage reading ",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1,
        "guidance_scale": 6,
    },
)
print(output)
To learn more, take a look at the guide on getting started with Python.
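replicate.run() blocks until the prediction finishes, which for this model is a couple of minutes per clip. If you would rather start a prediction and poll for the result, the Python client also exposes the predictions API. A rough sketch under that assumption (polling interval and error handling are up to you; the prompt is shortened here for brevity):

import time
import replicate

prediction = replicate.predictions.create(
    version="d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
    input={
        "prompt": "A video in the style of RCN, RCN The video clip depicts a dimly lit jazz club with a cozy, intimate atmosphere.",
        "width": 640,
        "height": 360,
        "num_frames": 66,
        "frame_rate": 16,
        "steps": 50,
        "guidance_scale": 6,
        "lora_strength": 1,
        "seed": 12345,
    },
)

# Poll until the prediction reaches a terminal state.
while prediction.status not in ("succeeded", "failed", "canceled"):
    time.sleep(5)
    prediction = replicate.predictions.get(prediction.id)

print(prediction.status, prediction.output)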
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
    "input": {
      "seed": 12345,
      "steps": 50,
      "width": 640,
      "height": 360,
      "prompt": "A video in the style of RCN, RCN The video clip depicts a dimly lit jazz club with a cozy, intimate atmosphere. The stage is set with a piano and a saxophone player, both engrossed in their performance. The saxophonist is positioned in the center, playing with a focused expression, while the pianist sits at the back, immersed in his music. The club is filled with patrons seated at tables, engaged in conversation and enjoying the live music. The audience appears to be a mix of men and women, dressed in casual to semi-formal attire. The lighting is warm and subdued, with a neon sign on the right side of the stage reading ",
      "frame_rate": 16,
      "num_frames": 66,
      "lora_strength": 1,
      "guidance_scale": 6
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T18:50:49.956647Z", "created_at": "2025-01-24T18:47:17.649000Z", "data_removed": false, "error": null, "id": "ew02qvphj5rma0cmk6hr3q60w4", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts a dimly lit jazz club with a cozy, intimate atmosphere. The stage is set with a piano and a saxophone player, both engrossed in their performance. The saxophonist is positioned in the center, playing with a focused expression, while the pianist sits at the back, immersed in his music. The club is filled with patrons seated at tables, engaged in conversation and enjoying the live music. The audience appears to be a mix of men and women, dressed in casual to semi-formal attire. The lighting is warm and subdued, with a neon sign on the right side of the stage reading ", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_4876d6ab-a72f-4081-8180-a29e2d0804db.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_4876d6ab-a72f-4081-8180-a29e2d0804db.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\nExecuting node 42, title: HunyuanVideo Enhance A Video, class type: HyVideoEnhanceAVideo[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:01, 1.70it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.60it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:01<00:00, 1.66it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.40it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.06it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 140\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model 
Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_4876d6ab-a72f-4081-8180-a29e2d0804db with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:58, 2.42s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:39, 2.07s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:41, 2.17s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:42, 2.22s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:41, 2.25s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:18<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:17, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:57<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.29s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.29s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 
2.29s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.29s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.760 GB\n[ComfyUI] Max allocated memory: max_memory=15.559 GB\n[ComfyUI] Max reserved memory: max_reserved=16.875 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.50s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.26s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.29s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 29.03it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.48it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.99it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.90it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 85.15it/s]\n[ComfyUI] Prompt executed in 147.91 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 154.183222987, "total_time": 212.307647 }, "output": "https://replicate.delivery/xezq/SPypkTwBqIbJMZfThIwZ0aQsMBBqiEBtRnofocQBb60JMZIUA/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T18:48:15.773424Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-pzsar5rk2gxudjcv6qknmvv7kl4ya7fj5bv5kk64ecwzhbliwmaq", "get": "https://api.replicate.com/v1/predictions/ew02qvphj5rma0cmk6hr3q60w4", "cancel": "https://api.replicate.com/v1/predictions/ew02qvphj5rma0cmk6hr3q60w4/cancel" }, "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad" }
Prediction
deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad
ID: 7qws6x9nb5rm80cmk6mr5v5w0g
Status: Succeeded
Source: API
Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of RCN, RCN The video clip depicts two men walking down a dimly lit, narrow hallway. The hallway appears to be part of an industrial or institutional building, as evidenced by the concrete walls and ceiling. The lighting is minimal, with a single light fixture visible on the ceiling, casting shadows on the walls and creating a somewhat eerie atmosphere. The man in the foreground is wearing a dark suit and tie, while the man behind him is dressed in a lighter-colored suit. Both men are walking in the same direction, with the man in the foreground slightly ahead. The hallway is lined with pipes and other utility conduits, adding to the industrial feel of the
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts two men walking down a dimly lit, narrow hallway. The hallway appears to be part of an industrial or institutional building, as evidenced by the concrete walls and ceiling. The lighting is minimal, with a single light fixture visible on the ceiling, casting shadows on the walls and creating a somewhat eerie atmosphere.\nThe man in the foreground is wearing a dark suit and tie, while the man behind him is dressed in a lighter-colored suit. Both men are walking in the same direction, with the man in the foreground slightly ahead. The hallway is lined with pipes and other utility conduits, adding to the industrial feel of the", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of RCN, RCN The video clip depicts two men walking down a dimly lit, narrow hallway. The hallway appears to be part of an industrial or institutional building, as evidenced by the concrete walls and ceiling. The lighting is minimal, with a single light fixture visible on the ceiling, casting shadows on the walls and creating a somewhat eerie atmosphere.\nThe man in the foreground is wearing a dark suit and tie, while the man behind him is dressed in a lighter-colored suit. Both men are walking in the same direction, with the man in the foreground slightly ahead. The hallway is lined with pipes and other utility conduits, adding to the industrial feel of the",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url());
//=> "http://example.com"

// To write the generated video to disk:
await writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of RCN, RCN The video clip depicts two men walking down a dimly lit, narrow hallway. The hallway appears to be part of an industrial or institutional building, as evidenced by the concrete walls and ceiling. The lighting is minimal, with a single light fixture visible on the ceiling, casting shadows on the walls and creating a somewhat eerie atmosphere.\nThe man in the foreground is wearing a dark suit and tie, while the man behind him is dressed in a lighter-colored suit. Both men are walking in the same direction, with the man in the foreground slightly ahead. The hallway is lined with pipes and other utility conduits, adding to the industrial feel of the",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1,
        "guidance_scale": 6,
    },
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
    "input": {
      "seed": 12345,
      "steps": 50,
      "width": 640,
      "height": 360,
      "prompt": "A video in the style of RCN, RCN The video clip depicts two men walking down a dimly lit, narrow hallway. The hallway appears to be part of an industrial or institutional building, as evidenced by the concrete walls and ceiling. The lighting is minimal, with a single light fixture visible on the ceiling, casting shadows on the walls and creating a somewhat eerie atmosphere.\\nThe man in the foreground is wearing a dark suit and tie, while the man behind him is dressed in a lighter-colored suit. Both men are walking in the same direction, with the man in the foreground slightly ahead. The hallway is lined with pipes and other utility conduits, adding to the industrial feel of the",
      "frame_rate": 16,
      "num_frames": 66,
      "lora_strength": 1,
      "guidance_scale": 6
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T18:55:35.488443Z", "created_at": "2025-01-24T18:53:10.873000Z", "data_removed": false, "error": null, "id": "7qws6x9nb5rm80cmk6mr5v5w0g", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts two men walking down a dimly lit, narrow hallway. The hallway appears to be part of an industrial or institutional building, as evidenced by the concrete walls and ceiling. The lighting is minimal, with a single light fixture visible on the ceiling, casting shadows on the walls and creating a somewhat eerie atmosphere.\nThe man in the foreground is wearing a dark suit and tie, while the man behind him is dressed in a lighter-colored suit. Both men are walking in the same direction, with the man in the foreground slightly ahead. The hallway is lined with pipes and other utility conduits, adding to the industrial feel of the", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_314d031b-0f34-494e-b081-f46a8164c232.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_314d031b-0f34-494e-b081-f46a8164c232.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 140\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_314d031b-0f34-494e-b081-f46a8164c232 with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.01s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.14s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.20s/it]\n[ComfyUI] 10%|█ | 5/50 [00:10<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:17, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.29s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.29s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.29s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.29s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.29s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.29s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.29s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.29s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.29s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.29s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.29s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:12<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:51<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.344 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.46s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 
1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.28s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.83it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.54it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.02it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.93it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.94it/s]\n[ComfyUI] Prompt executed in 140.39 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 141.865644888, "total_time": 144.615443 }, "output": "https://replicate.delivery/xezq/FBSXr68RaaYJI9KVQSEiRyxv4iGwQdwi5iQg4RmQBv9JUGCF/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T18:53:13.622798Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-2liokzcbtd7ysvt4iijmlqtou7oavcae74amqhlzqn727mew7yya", "get": "https://api.replicate.com/v1/predictions/7qws6x9nb5rm80cmk6mr5v5w0g", "cancel": "https://api.replicate.com/v1/predictions/7qws6x9nb5rm80cmk6mr5v5w0g/cancel" }, "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad" }
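Note the two ⚠️ lines at the top of the logs above: the requested 640x360 was bumped to 640x368 and the 66 requested frames were trimmed to 65 before sampling. If you would rather not rely on that silent adjustment, you can round the inputs yourself before submitting a prediction. The helper below is only a sketch based on what those log lines imply (dimensions appear to need to be multiples of 16, and the frame count appears to need the form 4k + 1, consistent with "Sampling 65 frames in 17 latents"); check the model schema before depending on it.

def round_to_model_requirements(width: int, height: int, num_frames: int):
    # Assumption from the logs: spatial dimensions are rounded up to multiples of 16
    # (360 -> 368 in the warning above).
    def up(x: int, multiple: int = 16) -> int:
        return ((x + multiple - 1) // multiple) * multiple
    # Assumption from the logs: the frame count must be of the form 4k + 1
    # (66 -> 65, sampled as 17 latents).
    frames = ((num_frames - 1) // 4) * 4 + 1
    return up(width), up(height), frames

print(round_to_model_requirements(640, 360, 66))  # (640, 368, 65), matching the warnings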
Prediction
deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad · ID: 3e68mry9exrmc0cmk6wa49q8er · Status: Succeeded · Source: Web · Hardware: H100 · Total duration · Created · Input
- crf
- 19
- seed
- 12345
- steps
- 50
- width
- 640
- height
- 360
- prompt
- A video in the style of RCN, RCN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.
- lora_url
- scheduler
- DPMSolverMultistepScheduler
- flow_shift
- 9
- frame_rate
- 16
- num_frames
- 66
- enhance_end
- 1
- enhance_start
- 0
- force_offload
- lora_strength
- 1
- enhance_double
- enhance_single
- enhance_weight
- 0.3
- guidance_scale
- 6
- denoise_strength
- 1
{ "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.\n", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 }
Install Replicate’s Node.js client library:npm install replicate
Import and set up the client:import Replicate from "replicate"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad", { input: { crf: 19, seed: 12345, steps: 50, width: 640, height: 360, prompt: "A video in the style of RCN, RCN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.\n", lora_url: "", scheduler: "DPMSolverMultistepScheduler", flow_shift: 9, frame_rate: 16, num_frames: 66, enhance_end: 1, enhance_start: 0, force_offload: true, lora_strength: 1, enhance_double: true, enhance_single: true, enhance_weight: 0.3, guidance_scale: 6, denoise_strength: 1 } } ); // To access the file URL: console.log(output.url()); //=> "http://example.com" // To write the file to disk: fs.writeFile("my-image.png", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:pip install replicate
Import the client:import replicate
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad", input={ "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.\n", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": True, "lora_strength": 1, "enhance_double": True, "enhance_single": True, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
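The Python snippet above only prints the result. As the output JSON below shows, the prediction resolves to a URL for an .mp4 hosted on replicate.delivery; here is a minimal sketch for saving it locally, assuming the returned value is (or stringifies to) that URL. Newer client versions may instead hand back a file-like object, in which case its own read/url accessors are the better route.

import urllib.request

video_url = str(output)  # assumption: this is the .mp4 URL shown in the output JSON below
urllib.request.urlretrieve(video_url, "hunyuan_arcane.mp4")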
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad", "input": { "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip features a close-up of a person\'s face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person\'s facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person\'s gaze conveying a sense of determination or resolve.\\n", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T19:13:24.512476Z", "created_at": "2025-01-24T19:10:11.831000Z", "data_removed": false, "error": null, "id": "3e68mry9exrmc0cmk6wa49q8er", "input": { "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip features a close-up of a person's face, focusing on their eyes and part of their hair. The individual has a serious or contemplative expression, with their eyes looking directly at the camera. The background is blurred, with warm, orange hues that suggest a setting sun or a fiery environment. The person is wearing large, geometric earrings that add a distinctive touch to their appearance. The lighting highlights the person's facial features, particularly their eyes, which are the central focus of the shot. The overall mood of the clip is intense and focused, with the person's gaze conveying a sense of determination or resolve.\n", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_bbac0c22-5c67-4679-a1b7-5f62fde185e2.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_bbac0c22-5c67-4679-a1b7-5f62fde185e2.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\nExecuting node 42, title: HunyuanVideo Enhance A Video, class type: HyVideoEnhanceAVideo\n[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:01, 1.59it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.63it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:01<00:00, 1.63it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.39it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.03it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: 
HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 137\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_bbac0c22-5c67-4679-a1b7-5f62fde185e2 with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:58, 2.43s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:39, 2.07s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:42, 2.17s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:42, 2.22s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:41, 2.25s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.27s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.28s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:18<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.29s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.30s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:25, 2.30s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.30s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.30s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:41<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:09, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:02, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:57<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:04<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:46, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:20<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 
38/50 [01:27<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:23, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:43<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:50<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.760 GB\n[ComfyUI] Max allocated memory: max_memory=15.559 GB\n[ComfyUI] Max reserved memory: max_reserved=16.875 GB\n[ComfyUI]\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.52s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.27s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.31s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 28.19it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.47it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.97it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.88it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 84.83it/s]\n[ComfyUI] Prompt executed in 149.98 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 156.539016993, "total_time": 192.681476 }, "output": "https://replicate.delivery/xezq/rTEAzQPKBh6yBZJNbhr3e0JSESw5XUGxdweoxB4qbN9UhZIUA/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T19:10:47.973459Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-bls2hb76ci4uatl7kghywddmzladbcd4vatfxkupktf2tsm2st4q", "get": "https://api.replicate.com/v1/predictions/3e68mry9exrmc0cmk6wa49q8er", "cancel": "https://api.replicate.com/v1/predictions/3e68mry9exrmc0cmk6wa49q8er/cancel" }, "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad" }
Prediction
deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad · ID: 6zdcv4b6t5rma0cmn4a95x02ag · Status: Succeeded · Source: API · Hardware: H100 · Total duration · Created · Input
- seed
- 12345
- steps
- 50
- width
- 640
- height
- 360
- prompt
- A video in the style of RCN, RCN The video clip depicts A face emerging from darkness as they step into a beam of light
- frame_rate
- 16
- num_frames
- 66
- lora_strength
- 1.2
- guidance_scale
- 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts A face emerging from darkness as they step into a beam of light", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }
Install Replicate’s Node.js client library:npm install replicate
Import and set up the client:import Replicate from "replicate"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad", { input: { seed: 12345, steps: 50, width: 640, height: 360, prompt: "A video in the style of RCN, RCN The video clip depicts A face emerging from darkness as they step into a beam of light", frame_rate: 16, num_frames: 66, lora_strength: 1.2, guidance_scale: 6 } } ); // To access the file URL: console.log(output.url()); //=> "http://example.com" // To write the file to disk: fs.writeFile("my-image.png", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:pip install replicate
Import the client:import replicate
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad", input={ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts A face emerging from darkness as they step into a beam of light", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts A face emerging from darkness as they step into a beam of light", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
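The curl call above sends the Prefer: wait header, so the response blocks until the video is ready. If you omit that header, or the request times out, the prediction can instead be polled through the urls.get endpoint shown in the output JSON below until status reaches a terminal value. A minimal polling sketch, assuming REPLICATE_API_TOKEN is set in the environment:

import json, os, time, urllib.request

def wait_for_prediction(get_url: str) -> dict:
    headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}
    while True:
        req = urllib.request.Request(get_url, headers=headers)
        with urllib.request.urlopen(req) as resp:
            prediction = json.load(resp)
        if prediction["status"] in ("succeeded", "failed", "canceled"):
            return prediction
        time.sleep(2)  # wait a couple of seconds between polls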
Output
{ "completed_at": "2025-01-27T18:51:04.835715Z", "created_at": "2025-01-27T18:44:34.129000Z", "data_removed": false, "error": null, "id": "6zdcv4b6t5rma0cmn4a95x02ag", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts A face emerging from darkness as they step into a beam of light", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_87a3b5df-2ec2-4995-812c-f35684a0fb5c.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_87a3b5df-2ec2-4995-812c-f35684a0fb5c.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\nExecuting node 42, title: HunyuanVideo Enhance A Video, class type: HyVideoEnhanceAVideo\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:01, 1.52it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.36it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:02<00:00, 1.43it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 2.11it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 1.79it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 29\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 28\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_87a3b5df-2ec2-4995-812c-f35684a0fb5c with strength: 1.2\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<02:00, 2.45s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:39, 2.08s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:42, 2.18s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:42, 2.23s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:41, 2.25s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.27s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.28s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:18<01:35, 2.29s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.29s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.30s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.30s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.30s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.30s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:41<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:57<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:20<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:43<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 
48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.760 GB\n[ComfyUI] Max allocated memory: max_memory=15.559 GB\n[ComfyUI] Max reserved memory: max_reserved=16.875 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.50s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.26s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.30s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 28.66it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.48it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.99it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.90it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 84.74it/s]\n[ComfyUI] Prompt executed in 150.01 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 157.226384095, "total_time": 390.706715 }, "output": "https://replicate.delivery/xezq/Xfeq4dee0mQmVQsFfPxEiTQ6EMxSHnSE4hsXieifd8oMMPsEKA/HunyuanVideo_00001.mp4", "started_at": "2025-01-27T18:48:27.609330Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-abvz4vvcrwxxiqfow7u7bpa7ppkpbxqvgnlc73nslehg2d3pd37q", "get": "https://api.replicate.com/v1/predictions/6zdcv4b6t5rma0cmn4a95x02ag", "cancel": "https://api.replicate.com/v1/predictions/6zdcv4b6t5rma0cmn4a95x02ag/cancel" }, "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad" }
Prediction
deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad · ID: jsny1jmsqdrme0cmn4a8mfqjzr · Status: Succeeded · Source: API · Hardware: H100 · Total duration · Created · Input
- seed
- 12345
- steps
- 50
- width
- 640
- height
- 360
- prompt
- A video in the style of RCN, RCN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward
- frame_rate
- 16
- num_frames
- 66
- lora_strength
- 1.2
- guidance_scale
- 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }
Install Replicate’s Node.js client library:npm install replicate
Import and set up the client:import Replicate from "replicate"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad", { input: { seed: 12345, steps: 50, width: 640, height: 360, prompt: "A video in the style of RCN, RCN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward", frame_rate: 16, num_frames: 66, lora_strength: 1.2, guidance_scale: 6 } } ); // To access the file URL: console.log(output.url()); //=> "http://example.com" // To write the file to disk: fs.writeFile("my-image.png", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:pip install replicate
Import the client:import replicate
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad", input={ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
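This prediction and the one before it raise lora_strength to 1.2 where the earlier examples use 1, and the "Loading LoRA ... with strength: 1.2" log line confirms the value is passed straight through to the LoRA loader. A sketch for sweeping that knob with the same Python client setup as above, reusing this page's example inputs, to compare how strongly the RCN style is applied:

for strength in (1.0, 1.2):  # the two values used in the examples on this page
    output = replicate.run(
        "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
        input={
            "seed": 12345,
            "steps": 50,
            "width": 640,
            "height": 360,
            "prompt": "A video in the style of RCN, RCN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward",
            "frame_rate": 16,
            "num_frames": 66,
            "lora_strength": strength,
            "guidance_scale": 6,
        },
    )
    print(strength, output)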
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts A detective\'s weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
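If you would rather not shell out to curl, the same POST can be made with Python's standard library. A rough sketch against the documented /v1/predictions endpoint, reusing the inputs from this example (error handling omitted):

import json
import os
import urllib.request

body = {
    "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
    "input": {
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of RCN, RCN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1.2,
        "guidance_scale": 6,
    },
}

req = urllib.request.Request(
    "https://api.replicate.com/v1/predictions",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
        "Content-Type": "application/json",
        "Prefer": "wait",  # block until the prediction finishes, as in the curl example
    },
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    prediction = json.load(resp)

print(prediction["status"], prediction.get("output"))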
Output
{ "completed_at": "2025-01-27T18:53:37.514665Z", "created_at": "2025-01-27T18:44:47.163000Z", "data_removed": false, "error": null, "id": "jsny1jmsqdrme0cmn4a8mfqjzr", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts A detective's weathered face emerging from darkness as they step into a single shaft of light, cigarette smoke curling upward", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_8bce0fda-9d04-4a82-b042-95479ea8bc5f.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_8bce0fda-9d04-4a82-b042-95479ea8bc5f.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 40\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 38\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_8bce0fda-9d04-4a82-b042-95479ea8bc5f with strength: 1.2\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.02s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.15s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.21s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.47s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 
1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.28s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.78it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.54it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.03it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.94it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.28it/s]\n[ComfyUI] Prompt executed in 144.05 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 152.580079727, "total_time": 530.351665 }, "output": "https://replicate.delivery/xezq/PIMRbX3deBQ3NqDeBCGflOsyvk6g3Ld0PVh6jjtawJZiBxSoA/HunyuanVideo_00001.mp4", "started_at": "2025-01-27T18:51:04.934585Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-tjqd3e34wklkfc5jwvz7353epjisk7cyxyuur4bsjfobjknsemuq", "get": "https://api.replicate.com/v1/predictions/jsny1jmsqdrme0cmn4a8mfqjzr", "cancel": "https://api.replicate.com/v1/predictions/jsny1jmsqdrme0cmn4a8mfqjzr/cancel" }, "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad" }
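Note the two warnings at the top of the logs: the requested 640x360 and 66 frames were adjusted to 640x368 and 65 frames. Together with the "Sampling 65 frames in 17 latents" line, this suggests the predictor rounds each dimension up to a multiple of 16 and snaps the frame count to the form 4k + 1. A small sketch of that inferred rule (not the predictor's actual code):

import math


def adjust_inputs(width: int, height: int, num_frames: int) -> tuple[int, int, int]:
    """Mimic the adjustments reported in the prediction logs (inferred, not official)."""
    # 640x360 -> 640x368: round each side up to a multiple of 16
    width = math.ceil(width / 16) * 16
    height = math.ceil(height / 16) * 16
    # 66 -> 65 frames: keep frames of the form 4k + 1 (65 frames -> 17 latents)
    num_frames = (num_frames - 1) // 4 * 4 + 1
    return width, height, num_frames


print(adjust_inputs(640, 360, 66))  # (640, 368, 65), matching the log warnings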
Prediction
deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad
ID: p0zrhqf0r1rma0cmn4qrb6h9n0
Status: Succeeded
Source: API
Hardware: H100
Input:
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of RCN, RCN The video clip depicts A warrior woman with long dark hair standing on a cliff edge, her black cloak billowing in the storm wind
- frame_rate: 16
- num_frames: 66
- lora_strength: 1.2
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts A warrior woman with long dark hair standing on a cliff edge, her black cloak billowing in the storm wind", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of RCN, RCN The video clip depicts A warrior woman with long dark hair standing on a cliff edge, her black cloak billowing in the storm wind",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1.2,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the video to disk (also add `import fs from "node:fs/promises";` next to the client import):
await fs.writeFile("my-video.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of RCN, RCN The video clip depicts A warrior woman with long dark hair standing on a cliff edge, her black cloak billowing in the storm wind",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1.2,
        "guidance_scale": 6
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
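replicate.run returns only the final output; if you also want the prediction id, metrics, and logs shown in the JSON responses on this page, you can create and poll a prediction instead. A sketch assuming the Python client's predictions.create and predictions.get methods:

import time

import replicate

prediction = replicate.predictions.create(
    version="d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of RCN, RCN The video clip depicts A warrior woman with long dark hair standing on a cliff edge, her black cloak billowing in the storm wind",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1.2,
        "guidance_scale": 6,
    },
)

# Poll until the prediction settles (succeeded, failed, or canceled).
while prediction.status not in ("succeeded", "failed", "canceled"):
    time.sleep(5)
    prediction = replicate.predictions.get(prediction.id)

print(prediction.status)
print(prediction.metrics)  # e.g. predict_time, as in the JSON output below
print(prediction.output)   # delivery URL of the MP4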
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
    "input": {
      "seed": 12345,
      "steps": 50,
      "width": 640,
      "height": 360,
      "prompt": "A video in the style of RCN, RCN The video clip depicts A warrior woman with long dark hair standing on a cliff edge, her black cloak billowing in the storm wind",
      "frame_rate": 16,
      "num_frames": 66,
      "lora_strength": 1.2,
      "guidance_scale": 6
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-27T19:16:57.771660Z", "created_at": "2025-01-27T19:14:34.816000Z", "data_removed": false, "error": null, "id": "p0zrhqf0r1rma0cmn4qrb6h9n0", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts A warrior woman with long dark hair standing on a cliff edge, her black cloak billowing in the storm wind", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_b29687e1-25d3-4782-a0e8-64bab29715af.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_b29687e1-25d3-4782-a0e8-64bab29715af.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 38\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 37\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_b29687e1-25d3-4782-a0e8-64bab29715af with strength: 1.2\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:53, 2.31s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:37, 2.03s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:41, 2.16s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.21s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.29s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:02, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.344 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.47s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 
1.25s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.28s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.86it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.54it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.02it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.94it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.68it/s]\n[ComfyUI] Prompt executed in 140.53 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 142.892602669, "total_time": 142.95566 }, "output": "https://replicate.delivery/xezq/s9RePs0fV9p4YEQBsqxhyGv6Zn5myJ8NW2SXnuh8Zf3TtxSoA/HunyuanVideo_00001.mp4", "started_at": "2025-01-27T19:14:34.879058Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-s3veb7ytw5tp45qywqzrfjigvi2qd4mogzfowkcwq4djs7ncwqvq", "get": "https://api.replicate.com/v1/predictions/p0zrhqf0r1rma0cmn4qrb6h9n0", "cancel": "https://api.replicate.com/v1/predictions/p0zrhqf0r1rma0cmn4qrb6h9n0/cancel" }, "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad" }
Prediction
deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad
ID: mwtt0c5r95rma0cmn4mvnadj74
Status: Succeeded
Source: API
Hardware: H100
Input:
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of RCN, RCN The video clip depicts Close up of a bride's face as her smile fades to shock, pearls glinting around her neck
- frame_rate: 16
- num_frames: 66
- lora_strength: 1.2
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts Close up of a bride's face as her smile fades to shock, pearls glinting around her neck", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of RCN, RCN The video clip depicts Close up of a bride's face as her smile fades to shock, pearls glinting around her neck",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1.2,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the video to disk (also add `import fs from "node:fs/promises";` next to the client import):
await fs.writeFile("my-video.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of RCN, RCN The video clip depicts Close up of a bride's face as her smile fades to shock, pearls glinting around her neck",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1.2,
        "guidance_scale": 6
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
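Every prompt on this page starts with the same trigger phrase, so it is easy to factor out when rendering several clips in a row. A minimal batching sketch; the prompt list and loop are illustrative:

import replicate

TRIGGER = "A video in the style of RCN, RCN "
MODEL = "deepfates/hunyuan-arcane:d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad"

prompts = [
    "Close up of a bride's face as her smile fades to shock, pearls glinting around her neck",
    "A warrior woman with long dark hair standing on a cliff edge, her black cloak billowing in the storm wind",
]

for i, prompt in enumerate(prompts):
    output = replicate.run(
        MODEL,
        input={
            "prompt": TRIGGER + "The video clip depicts " + prompt,
            "seed": 12345,
            "steps": 50,
            "width": 640,
            "height": 360,
            "num_frames": 66,
            "frame_rate": 16,
            "guidance_scale": 6,
            "lora_strength": 1.2,
        },
    )
    print(i, output)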
Run deepfates/hunyuan-arcane using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad",
    "input": {
      "seed": 12345,
      "steps": 50,
      "width": 640,
      "height": 360,
      "prompt": "A video in the style of RCN, RCN The video clip depicts Close up of a bride\'s face as her smile fades to shock, pearls glinting around her neck",
      "frame_rate": 16,
      "num_frames": 66,
      "lora_strength": 1.2,
      "guidance_scale": 6
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-27T19:10:17.940789Z", "created_at": "2025-01-27T19:07:51.241000Z", "data_removed": false, "error": null, "id": "mwtt0c5r95rma0cmn4mvnadj74", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of RCN, RCN The video clip depicts Close up of a bride's face as her smile fades to shock, pearls glinting around her neck", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_ee828672-cd85-4597-93b0-7a8ea9af8166.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_ee828672-cd85-4597-93b0-7a8ea9af8166.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 37\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 36\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_ee828672-cd85-4597-93b0-7a8ea9af8166 with strength: 1.2\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.02s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.14s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.20s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.30s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.30s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.47s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 
1.25s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.28s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.28it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.53it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.00it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.92it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 65.45it/s]\n[ComfyUI] Prompt executed in 142.67 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 146.691717152, "total_time": 146.699789 }, "output": "https://replicate.delivery/xezq/ewu6DyYKSrzuBK396WZonrdRyIO1TNA8sh19fWPANlXZwYJUA/HunyuanVideo_00001.mp4", "started_at": "2025-01-27T19:07:51.249072Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-tyrrsew4r6z2ivofq42dfxffcm7rp3qotiaefu22mj5xcx5a6kbq", "get": "https://api.replicate.com/v1/predictions/mwtt0c5r95rma0cmn4mvnadj74", "cancel": "https://api.replicate.com/v1/predictions/mwtt0c5r95rma0cmn4mvnadj74/cancel" }, "version": "d294d8b37fd60ff1499e631d054250ae51709fe87e8e32d563dd98c610a40bad" }
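The three responses above share one shape, so pulling the useful fields out of a saved response is mechanical. A small sketch, assuming the JSON has been written to a local prediction.json file (the filename is illustrative):

import json

with open("prediction.json") as f:
    prediction = json.load(f)

print("status:      ", prediction["status"])
print("predict_time:", prediction["metrics"]["predict_time"], "s")
print("video:       ", prediction["output"])
print("cancel url:  ", prediction["urls"]["cancel"])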
Want to make some of these yourself?
Run this model