deepfates / hunyuan-indiana-jones
Hunyuan-Video model fine-tuned on the Indiana Jones series (1981). The trigger word is "NDNJN". For best results, begin your prompt with "A video in the style of NDNJN, NDNJN". A minimal usage sketch follows the model details below.
- Public
- 98 runs
- Fine-tune
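As a quick orientation, here is a minimal sketch of calling this fine-tune from Python with the trigger phrase prepended to the prompt. It mirrors the Python snippets further down this page; the example prompt text is a placeholder, while the other inputs echo values used in the recorded predictions below.

import replicate

MODEL = "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526"
TRIGGER = "A video in the style of NDNJN, NDNJN"

# Prepend the trigger phrase so the fine-tuned style is applied.
prompt = f"{TRIGGER} A man in a fedora runs through a torch-lit temple corridor."

output = replicate.run(
    MODEL,
    input={
        "prompt": prompt,
        "width": 640,
        "height": 360,
        "num_frames": 66,
        "frame_rate": 16,
    },
)
print(output)  # URL of the generated MP4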
Prediction
deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526
- ID: rvch84e1b1rma0cmjpsatamkn4
- Status: Succeeded
- Source: API
- Hardware: H100
Input
- crf: 19
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NDNJN, NDNJN The video clip depicts a detailed portrait of a woman's face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire. The woman's expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on
- lora_url: ""
- scheduler: DPMSolverMultistepScheduler
- flow_shift: 9
- frame_rate: 16
- num_frames: 66
- enhance_end: 1
- enhance_start: 0
- force_offload: true
- lora_strength: 1
- enhance_double: true
- enhance_single: true
- enhance_weight: 0.3
- guidance_scale: 6
- denoise_strength: 1
{ "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip depicts a detailed portrait of a woman's face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire.\nThe woman's expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526",
  {
    input: {
      crf: 19,
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of NDNJN, NDNJN The video clip depicts a detailed portrait of a woman's face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire.\nThe woman's expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on",
      lora_url: "",
      scheduler: "DPMSolverMultistepScheduler",
      flow_shift: 9,
      frame_rate: 16,
      num_frames: 66,
      enhance_end: 1,
      enhance_start: 0,
      force_offload: true,
      lora_strength: 1,
      enhance_double: true,
      enhance_single: true,
      enhance_weight: 0.3,
      guidance_scale: 6,
      denoise_strength: 1
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the generated video to disk (this model outputs an MP4, not a PNG):
fs.writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526",
    input={
        "crf": 19,
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of NDNJN, NDNJN The video clip depicts a detailed portrait of a woman's face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire.\nThe woman's expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on",
        "lora_url": "",
        "scheduler": "DPMSolverMultistepScheduler",
        "flow_shift": 9,
        "frame_rate": 16,
        "num_frames": 66,
        "enhance_end": 1,
        "enhance_start": 0,
        "force_offload": True,
        "lora_strength": 1,
        "enhance_double": True,
        "enhance_single": True,
        "enhance_weight": 0.3,
        "guidance_scale": 6,
        "denoise_strength": 1
    }
)
print(output)
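The snippet prints whatever replicate.run() returns; in the Output record below that value is a URL to an MP4 file on replicate.delivery. A small follow-up sketch, standard library only, assuming the returned value is (or converts with str() to) that output URL; newer client versions may wrap it in a file object instead:

import urllib.request

# `output` comes from the replicate.run() call above.
video_url = str(output)
urllib.request.urlretrieve(video_url, "hunyuan_ndnjn.mp4")
print("Saved video to hunyuan_ndnjn.mp4")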
To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526", "input": { "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip depicts a detailed portrait of a woman\'s face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire.\\nThe woman\'s expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T00:29:34.296340Z", "created_at": "2025-01-24T00:25:07.672000Z", "data_removed": false, "error": null, "id": "rvch84e1b1rma0cmjpsatamkn4", "input": { "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip depicts a detailed portrait of a woman's face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire.\nThe woman's expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 148\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.01s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.14s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.20s/it]\n[ComfyUI] 10%|█ | 5/50 [00:10<01:40, 2.23s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.25s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.26s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.27s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.28s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.28s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:26, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:17, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.29s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.29s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:49<01:04, 2.29s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.29s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.29s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.29s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.29s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.29s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.29s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:05<00:48, 2.29s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.29s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.29s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:12<00:41, 2.29s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:38, 2.29s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.29s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.29s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.29s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.29s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.29s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:28<00:25, 2.29s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.29s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.29s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.29s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.29s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.29s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.29s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:44<00:09, 2.29s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.29s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.29s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:51<00:02, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.28s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.46s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.27s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, 
?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 26.04it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.56it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.04it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.95it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 65.00it/s]\n[ComfyUI] Prompt executed in 130.97 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 136.152923427, "total_time": 266.62434 }, "output": "https://replicate.delivery/xezq/08N33VU8OdpEAZ0hMjRk4ZzZxLvrLrN6rdseX3T4baN3hEEKA/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T00:27:18.143416Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-lsuwtb6d2nuaqfxwic5hrpmz3bbmplkvpdozopg2fzszqf3zu2oq", "get": "https://api.replicate.com/v1/predictions/rvch84e1b1rma0cmjpsatamkn4", "cancel": "https://api.replicate.com/v1/predictions/rvch84e1b1rma0cmjpsatamkn4/cancel" }, "version": "f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526" }
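The warnings at the top of these logs show the inputs being adjusted before sampling: 640x360 becomes 640x368 and 66 frames become 65. That is consistent with spatial dimensions being rounded up to multiples of 16 and the frame count being snapped to the nearest 4n+1 value, though this rule is inferred from the warnings rather than taken from the model's source. A hypothetical helper that reproduces the adjustment and computes the resulting clip length:

import math

def adjust_inputs(width, height, num_frames, frame_rate):
    # Inferred from the log warnings above, not from the model's code.
    width_adj = math.ceil(width / 16) * 16        # round up to a multiple of 16
    height_adj = math.ceil(height / 16) * 16
    frames_adj = ((num_frames - 1) // 4) * 4 + 1  # snap down to 4n + 1
    duration_s = frames_adj / frame_rate
    return width_adj, height_adj, frames_adj, duration_s

print(adjust_inputs(640, 360, 66, 16))
# -> (640, 368, 65, 4.0625), matching the 640x368 / 65-frame run reported in the logs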
Prediction
deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526
- ID: j1h6cf54kdrmc0cmjq1bpxcm1w
- Status: Succeeded
- Source: API
- Hardware: H100
Input
- crf: 19
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video of NDNJN, a wizard in a blue-gray robe, his face shadowed inside a hood, reads a book in a cyberpunk hacker's room with computers all over the place. He suddenly looks up and directly at the camera with glowing blue eyes.
- lora_url: ""
- scheduler: DPMSolverMultistepScheduler
- flow_shift: 9
- frame_rate: 16
- num_frames: 66
- enhance_end: 1
- enhance_start: 0
- force_offload: true
- lora_strength: 1
- enhance_double: true
- enhance_single: true
- enhance_weight: 0.3
- guidance_scale: 6
- denoise_strength: 1
{ "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video of NDNJN, a wizard in a blue-gray robe, his face shadowed inside a hood, reads a book in a cyberpunk hacker's room with computers all over the place. He suddenly looks up and directly at the camera with glowing blue eyes.", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526",
  {
    input: {
      crf: 19,
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video of NDNJN, a wizard in a blue-gray robe, his face shadowed inside a hood, reads a book in a cyberpunk hacker's room with computers all over the place. He suddenly looks up and directly at the camera with glowing blue eyes.",
      lora_url: "",
      scheduler: "DPMSolverMultistepScheduler",
      flow_shift: 9,
      frame_rate: 16,
      num_frames: 66,
      enhance_end: 1,
      enhance_start: 0,
      force_offload: true,
      lora_strength: 1,
      enhance_double: true,
      enhance_single: true,
      enhance_weight: 0.3,
      guidance_scale: 6,
      denoise_strength: 1
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the generated video to disk (this model outputs an MP4, not a PNG):
fs.writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526",
    input={
        "crf": 19,
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video of NDNJN, a wizard in a blue-gray robe, his face shadowed inside a hood, reads a book in a cyberpunk hacker's room with computers all over the place. He suddenly looks up and directly at the camera with glowing blue eyes.",
        "lora_url": "",
        "scheduler": "DPMSolverMultistepScheduler",
        "flow_shift": 9,
        "frame_rate": 16,
        "num_frames": 66,
        "enhance_end": 1,
        "enhance_start": 0,
        "force_offload": True,
        "lora_strength": 1,
        "enhance_double": True,
        "enhance_single": True,
        "enhance_weight": 0.3,
        "guidance_scale": 6,
        "denoise_strength": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526", "input": { "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video of NDNJN, a wizard in a blue-gray robe, his face shadowed inside a hood, reads a book in a cyberpunk hacker\'s room with computers all over the place. He suddenly looks up and directly at the camera with glowing blue eyes.", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T00:45:59.789920Z", "created_at": "2025-01-24T00:42:28.891000Z", "data_removed": false, "error": null, "id": "j1h6cf54kdrmc0cmjq1bpxcm1w", "input": { "crf": 19, "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video of NDNJN, a wizard in a blue-gray robe, his face shadowed inside a hood, reads a book in a cyberpunk hacker's room with computers all over the place. He suddenly looks up and directly at the camera with glowing blue eyes.", "lora_url": "", "scheduler": "DPMSolverMultistepScheduler", "flow_shift": 9, "frame_rate": 16, "num_frames": 66, "enhance_end": 1, "enhance_start": 0, "force_offload": true, "lora_strength": 1, "enhance_double": true, "enhance_single": true, "enhance_weight": 0.3, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\n[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\nExecuting node 42, title: HunyuanVideo Enhance A Video, class type: HyVideoEnhanceAVideo\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:02, 1.31it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.50it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:01<00:00, 1.56it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 2.32it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 1.92it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 56\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 57\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: lora_comfyui with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:59, 2.43s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:39, 2.07s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:42, 2.18s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:42, 2.23s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:41, 2.26s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.27s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:38, 2.28s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:18<01:36, 2.29s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:34, 2.30s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.30s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:25<01:29, 2.30s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.30s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:25, 2.30s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.30s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.30s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.31s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:16, 2.31s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:41<01:13, 2.31s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.31s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:09, 2.31s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:48<01:06, 2.31s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.31s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:02, 2.31s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.31s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:57<00:57, 2.31s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.31s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:53, 2.31s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:04<00:50, 2.31s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.31s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:46, 2.31s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:11<00:43, 2.31s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.31s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.31s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:18<00:36, 2.31s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:20<00:34, 2.31s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.31s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.31s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:27<00:27, 2.31s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.31s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:23, 2.31s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:34<00:20, 2.31s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.31s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.31s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:41<00:13, 2.31s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:43<00:11, 2.31s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.31s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:48<00:06, 2.31s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:50<00:04, 
2.31s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.31s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.31s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] Allocated memory: memory=12.760 GB\n[ComfyUI] Max allocated memory: max_memory=15.559 GB\n[ComfyUI] Max reserved memory: max_reserved=16.875 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.53s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.28s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.32s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 28.31it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.46it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.96it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.87it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 82.39it/s]\n[ComfyUI] Prompt executed in 151.95 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 155.430221155, "total_time": 210.89892 }, "output": "https://replicate.delivery/xezq/MkkozaoGZPpfeEoKbYCnfXSktlrGhwRJ6fXTMuUjaJ1cMlgQB/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T00:43:24.359699Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-g2qktjm3il7y7iwnko3vwqe362rnon2phfowvrnsffpu453eh4ia", "get": "https://api.replicate.com/v1/predictions/j1h6cf54kdrmc0cmjq1bpxcm1w", "cancel": "https://api.replicate.com/v1/predictions/j1h6cf54kdrmc0cmjq1bpxcm1w/cancel" }, "version": "f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526" }
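Each prediction record above also exposes a urls.get endpoint. If you create predictions through the HTTP API without Prefer: wait, or a run outlives the wait window, you can poll that endpoint until the status settles. A standard-library sketch, assuming only the Bearer token header and the status/output fields visible in the records on this page:

import json
import os
import time
import urllib.request

def poll_prediction(get_url, interval=5.0):
    # Poll a Replicate prediction URL until it reaches a terminal status.
    headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}
    while True:
        req = urllib.request.Request(get_url, headers=headers)
        with urllib.request.urlopen(req) as resp:
            prediction = json.load(resp)
        if prediction["status"] in ("succeeded", "failed", "canceled"):
            return prediction
        time.sleep(interval)

# Example, using the prediction ID from the record above:
# result = poll_prediction("https://api.replicate.com/v1/predictions/j1h6cf54kdrmc0cmjq1bpxcm1w")
# print(result["status"], result["output"])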
Prediction
deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526
- ID: fy1jtt7xnhrma0cmjrx9pz7mq4
- Status: Succeeded
- Source: API
- Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NDNJN, NDNJN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of NDNJN, NDNJN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the generated video to disk (this model outputs an MP4, not a PNG):
fs.writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of NDNJN, NDNJN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1,
        "guidance_scale": 6
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T03:10:20.557099Z", "created_at": "2025-01-24T02:53:56.012000Z", "data_removed": false, "error": null, "id": "fy1jtt7xnhrma0cmjrx9pz7mq4", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip shows a man smoking a cigarette in a dimly light room. The ember glows as he inhales and then he sighs a cloud of smoke. The camera pans left to follow the smoke as it blows out the window toward the moonlit sea", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.01s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.14s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.20s/it]\n[ComfyUI] 10%|█ | 5/50 [00:10<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ 
| 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.312 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.50s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.25s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.29s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.95it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.55it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.03it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.94it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.47it/s]\n[ComfyUI] Prompt executed in 123.32 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 129.310540894, "total_time": 984.545099 }, "output": "https://replicate.delivery/xezq/tVcPjDUFrqLxMhK8WBW3t5pmAtaDOXECjxdxk8qjFPGn2CCF/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T03:08:11.246558Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-qbey36q4jktr7ujzxidj7mo77sn5kqyrin6ftap6xq2thpnl5f3a", "get": "https://api.replicate.com/v1/predictions/fy1jtt7xnhrma0cmjrx9pz7mq4", "cancel": "https://api.replicate.com/v1/predictions/fy1jtt7xnhrma0cmjrx9pz7mq4/cancel" }, "version": "f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526" }
Prediction
deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526
ID: 8gd732b3p5rmc0cmjs3tzh5g70 · Status: Succeeded · Source: API · Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NDNJN, NDNJN The video clip depicts a horse-drawn carriage traveling along a foggy, winding road. The road is lined with trees on both sides, creating a dense and mysterious atmosphere. The carriage is a traditional style with four large wheels and a roof, pulled by a single horse. The horse is dark in color, possibly black or dark brown, and seems to be moving at a leisurely pace. In the foreground, the road is clearly visible, with rocks and vegetation along the sides. The fog is thick, limiting visibility and adding to the sense of isolation and mystery. The trees are covered in leaves, suggesting a late spring or summer setting. In the distance
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip depicts a horse-drawn carriage traveling along a foggy, winding road. The road is lined with trees on both sides, creating a dense and mysterious atmosphere. The carriage is a traditional style with four large wheels and a roof, pulled by a single horse. The horse is dark in color, possibly black or dark brown, and seems to be moving at a leisurely pace.\nIn the foreground, the road is clearly visible, with rocks and vegetation along the sides. The fog is thick, limiting visibility and adding to the sense of isolation and mystery. The trees are covered in leaves, suggesting a late spring or summer setting.\nIn the distance", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }
Install Replicate’s Node.js client library: npm install replicate
Import and set up the client: import Replicate from "replicate"; import fs from "node:fs"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526", { input: { seed: 12345, steps: 50, width: 640, height: 360, prompt: "A video in the style of NDNJN, NDNJN The video clip depicts a horse-drawn carriage traveling along a foggy, winding road. The road is lined with trees on both sides, creating a dense and mysterious atmosphere. The carriage is a traditional style with four large wheels and a roof, pulled by a single horse. The horse is dark in color, possibly black or dark brown, and seems to be moving at a leisurely pace.\nIn the foreground, the road is clearly visible, with rocks and vegetation along the sides. The fog is thick, limiting visibility and adding to the sense of isolation and mystery. The trees are covered in leaves, suggesting a late spring or summer setting.\nIn the distance", frame_rate: 16, num_frames: 66, lora_strength: 1, guidance_scale: 6 } } ); // To access the file URL: console.log(output.url()); //=> "http://example.com" // To write the file to disk: fs.writeFile("my-image.png", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library: pip install replicate
Import the client: import replicate
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526", input={ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip depicts a horse-drawn carriage traveling along a foggy, winding road. The road is lined with trees on both sides, creating a dense and mysterious atmosphere. The carriage is a traditional style with four large wheels and a roof, pulled by a single horse. The horse is dark in color, possibly black or dark brown, and seems to be moving at a leisurely pace.\nIn the foreground, the road is clearly visible, with rocks and vegetation along the sides. The fog is thick, limiting visibility and adding to the sense of isolation and mystery. The trees are covered in leaves, suggesting a late spring or summer setting.\nIn the distance", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
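The snippet above just prints the result, but the model returns a video, so it is worth saving it under an .mp4 name rather than the generic image filename used in the Node.js example. A short sketch with the Python client, assuming the returned value is the delivery URL (or a URL-like object) as printed above; the prompt is abbreviated here for space.

import urllib.request

import replicate

output = replicate.run(
    "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526",
    input={
        # Same input as the snippet above; the prompt is abbreviated here.
        "prompt": "A video in the style of NDNJN, NDNJN The video clip depicts a horse-drawn carriage ...",
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1,
        "guidance_scale": 6,
    },
)

# The output is the URL of an H.264 MP4 (see the Output JSON below),
# so download it to a file with a matching video extension.
urllib.request.urlretrieve(str(output), "HunyuanVideo_00001.mp4")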
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip depicts a horse-drawn carriage traveling along a foggy, winding road. The road is lined with trees on both sides, creating a dense and mysterious atmosphere. The carriage is a traditional style with four large wheels and a roof, pulled by a single horse. The horse is dark in color, possibly black or dark brown, and seems to be moving at a leisurely pace.\\nIn the foreground, the road is clearly visible, with rocks and vegetation along the sides. The fog is thick, limiting visibility and adding to the sense of isolation and mystery. The trees are covered in leaves, suggesting a late spring or summer setting.\\nIn the distance", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
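The Prefer: wait header asks the API to hold the request open until the prediction finishes; if it returns before then, the prediction can be polled through the urls.get endpoint included in the response below. A rough sketch of that polling loop using only the standard library, assuming REPLICATE_API_TOKEN is set in the environment:

import json
import os
import time
import urllib.request

def wait_for_prediction(get_url: str, poll_seconds: float = 5.0) -> dict:
    """Poll a prediction's `get` URL until it reaches a terminal status."""
    headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}
    while True:
        req = urllib.request.Request(get_url, headers=headers)
        with urllib.request.urlopen(req) as resp:
            prediction = json.load(resp)
        if prediction["status"] in ("succeeded", "failed", "canceled"):
            return prediction
        time.sleep(poll_seconds)

# Example, using the `get` URL from the prediction shown below:
# result = wait_for_prediction(
#     "https://api.replicate.com/v1/predictions/8gd732b3p5rmc0cmjs3tzh5g70")
# print(result["output"])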
Output
{ "completed_at": "2025-01-24T03:19:13.660607Z", "created_at": "2025-01-24T03:07:28.561000Z", "data_removed": false, "error": null, "id": "8gd732b3p5rmc0cmjs3tzh5g70", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip depicts a horse-drawn carriage traveling along a foggy, winding road. The road is lined with trees on both sides, creating a dense and mysterious atmosphere. The carriage is a traditional style with four large wheels and a roof, pulled by a single horse. The horse is dark in color, possibly black or dark brown, and seems to be moving at a leisurely pace.\nIn the foreground, the road is clearly visible, with rocks and vegetation along the sides. The fog is thick, limiting visibility and adding to the sense of isolation and mystery. The trees are covered in leaves, suggesting a late spring or summer setting.\nIn the distance", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 148\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.01s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.14s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.20s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.29s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:12<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.312 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.47s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.28s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, 
?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.92it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.55it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.03it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.94it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.87it/s]\n[ComfyUI] Prompt executed in 131.39 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 133.074935467, "total_time": 705.099607 }, "output": "https://replicate.delivery/xezq/Xpzl1MaqAVKtOVMqjmQhTc1f3sihOI2C1KK4q8s9YH6YxFEKA/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T03:17:00.585671Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-dobnqit4r5pv7dabvi4ba5tho5giujqt2v24254ujdcssaods5sq", "get": "https://api.replicate.com/v1/predictions/8gd732b3p5rmc0cmjs3tzh5g70", "cancel": "https://api.replicate.com/v1/predictions/8gd732b3p5rmc0cmjs3tzh5g70/cancel" }, "version": "f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526" }
Prediction
deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526
ID: wsm9egpj8drma0cmk5x95sh7m0 · Status: Succeeded · Source: API · Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NDNJN, NDNJN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }
Install Replicate’s Node.js client library: npm install replicate
Import and set up the client: import Replicate from "replicate"; import fs from "node:fs"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526", { input: { seed: 12345, steps: 50, width: 640, height: 360, prompt: "A video in the style of NDNJN, NDNJN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene", frame_rate: 16, num_frames: 66, lora_strength: 1, guidance_scale: 6 } } ); // To access the file URL: console.log(output.url()); //=> "http://example.com" // To write the file to disk: fs.writeFile("my-image.png", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library: pip install replicate
Import the client: import replicate
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526", input={ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
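Every example prompt on this page opens with the same trigger phrase before the scene description. If prompts are being assembled programmatically, a tiny helper keeps that prefix consistent; this is a sketch, and the helper name is made up here.

TRIGGER_PREFIX = "A video in the style of NDNJN, NDNJN "

def build_prompt(description: str) -> str:
    # Hypothetical helper: prepend the trigger phrase used throughout these examples.
    return TRIGGER_PREFIX + description.strip()

print(build_prompt("The video clip features three individuals standing in a red elevator."))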
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T18:11:09.210718Z", "created_at": "2025-01-24T18:02:30.851000Z", "data_removed": false, "error": null, "id": "wsm9egpj8drma0cmk5x95sh7m0", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_84783c98-f013-4713-aeb9-33264694c86a.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_84783c98-f013-4713-aeb9-33264694c86a.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 142\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_84783c98-f013-4713-aeb9-33264694c86a with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.30s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.01s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.14s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.20s/it]\n[ComfyUI] 10%|█ | 5/50 [00:10<01:40, 2.24s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.30s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:02, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:12<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.30s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.30s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.30s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.30s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.30s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.30s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.30s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.47s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 
1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.28s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 25.91it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.55it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.02it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.94it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.98it/s]\n[ComfyUI] Prompt executed in 140.05 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 145.929401128, "total_time": 518.359718 }, "output": "https://replicate.delivery/xezq/JAsZsLfdpi1cCygTmVAKcyPH2emaKNmhyRUeCJzuAffu3EDhC/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T18:08:43.281317Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-wbriqmw5nphsma5tdovo5nq7cawzkk2n5xjwlz56dpiw3yth42vq", "get": "https://api.replicate.com/v1/predictions/wsm9egpj8drma0cmk5x95sh7m0", "cancel": "https://api.replicate.com/v1/predictions/wsm9egpj8drma0cmk5x95sh7m0/cancel" }, "version": "f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526" }
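The prediction record above bundles the inputs, the raw ComfyUI logs, timing metrics, and the delivery URL into one JSON object. A small sketch of pulling out the useful fields, assuming the record has been saved to a local file (the filename is hypothetical); the same keys apply to the JSON body returned by the API calls shown earlier.

import json

# Load a prediction record like the one shown above (hypothetical filename).
with open("prediction.json") as f:
    prediction = json.load(f)

video_url = prediction["output"]                      # delivery URL of the .mp4
predict_time = prediction["metrics"]["predict_time"]  # model time in seconds
total_time = prediction["metrics"]["total_time"]      # includes queue and setup time

print(f"{video_url} rendered in {predict_time:.1f}s (total {total_time:.1f}s)")

# The ComfyUI logs arrive as one newline-separated string; the lines starting
# with "⚠️" are the automatic dimension and frame-count adjustments.
warnings = [line for line in prediction["logs"].splitlines() if line.startswith("⚠️")]
print("\n".join(warnings))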
Prediction
deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526
ID: njtramfd39rme0cmk5xvf1cak4 · Status: Succeeded · Source: API · Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NDNJN, NDNJN The video clip features a woman standing outdoors in what appears to be a historical or rural setting. The lighting is warm and golden, suggesting that it is either early morning or late afternoon. The woman has long, wavy hair that is partially tied back, and she is wearing a blue top with a lace or embroidered neckline. The background is slightly blurred, but it shows wooden structures and a dirt path, indicating a rustic environment. The overall atmosphere is serene and contemplative.
- frame_rate: 16
- num_frames: 66
- lora_strength: 1
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip features a woman standing outdoors in what appears to be a historical or rural setting. The lighting is warm and golden, suggesting that it is either early morning or late afternoon. The woman has long, wavy hair that is partially tied back, and she is wearing a blue top with a lace or embroidered neckline. The background is slightly blurred, but it shows wooden structures and a dirt path, indicating a rustic environment. The overall atmosphere is serene and contemplative.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }
Install Replicate’s Node.js client library: npm install replicate
Import and set up the client: import Replicate from "replicate"; import fs from "node:fs"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526", { input: { seed: 12345, steps: 50, width: 640, height: 360, prompt: "A video in the style of NDNJN, NDNJN The video clip features a woman standing outdoors in what appears to be a historical or rural setting. The lighting is warm and golden, suggesting that it is either early morning or late afternoon. The woman has long, wavy hair that is partially tied back, and she is wearing a blue top with a lace or embroidered neckline. The background is slightly blurred, but it shows wooden structures and a dirt path, indicating a rustic environment. The overall atmosphere is serene and contemplative.", frame_rate: 16, num_frames: 66, lora_strength: 1, guidance_scale: 6 } } ); // To access the file URL: console.log(output.url()); //=> "http://example.com" // To write the file to disk: fs.writeFile("my-image.png", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library: pip install replicate
Import the client: import replicate
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526", input={ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip features a woman standing outdoors in what appears to be a historical or rural setting. The lighting is warm and golden, suggesting that it is either early morning or late afternoon. The woman has long, wavy hair that is partially tied back, and she is wearing a blue top with a lace or embroidered neckline. The background is slightly blurred, but it shows wooden structures and a dirt path, indicating a rustic environment. The overall atmosphere is serene and contemplative.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
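All of the examples on this page reuse seed 12345 and only vary the prompt, which makes the runs directly comparable. A sketch of batching several prompts that way with the Python client; the prompts are abbreviated and the output filenames are arbitrary choices for illustration.

import urllib.request

import replicate

VERSION = "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526"

prompts = [
    "A video in the style of NDNJN, NDNJN The video clip depicts a horse-drawn carriage ...",
    "A video in the style of NDNJN, NDNJN The video clip features three individuals standing in a red elevator ...",
    "A video in the style of NDNJN, NDNJN The video clip features a woman standing outdoors ...",
]

for i, prompt in enumerate(prompts):
    output = replicate.run(
        VERSION,
        input={"prompt": prompt, "seed": 12345, "steps": 50,
               "width": 640, "height": 360, "frame_rate": 16,
               "num_frames": 66, "lora_strength": 1, "guidance_scale": 6},
    )
    # Fixed seed across runs, so differences come from the prompt alone.
    urllib.request.urlretrieve(str(output), f"ndnjn_{i:02d}.mp4")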
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip features a woman standing outdoors in what appears to be a historical or rural setting. The lighting is warm and golden, suggesting that it is either early morning or late afternoon. The woman has long, wavy hair that is partially tied back, and she is wearing a blue top with a lace or embroidered neckline. The background is slightly blurred, but it shows wooden structures and a dirt path, indicating a rustic environment. The overall atmosphere is serene and contemplative.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-24T18:12:06.388128Z", "created_at": "2025-01-24T18:03:43.258000Z", "data_removed": false, "error": null, "id": "njtramfd39rme0cmk5xvf1cak4", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip features a woman standing outdoors in what appears to be a historical or rural setting. The lighting is warm and golden, suggesting that it is either early morning or late afternoon. The woman has long, wavy hair that is partially tied back, and she is wearing a blue top with a lace or embroidered neckline. The background is slightly blurred, but it shows wooden structures and a dirt path, indicating a rustic environment. The overall atmosphere is serene and contemplative.", "frame_rate": 16, "num_frames": 66, "lora_strength": 1, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_f5962bf2-b3d9-4bdb-8fa0-b83ccef72d16.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_f5962bf2-b3d9-4bdb-8fa0-b83ccef72d16.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 111\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_f5962bf2-b3d9-4bdb-8fa0-b83ccef72d16 with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:52, 2.29s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:36, 2.01s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:40, 2.14s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:41, 2.20s/it]\n[ComfyUI] 10%|█ | 5/50 [00:10<01:40, 2.23s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.25s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:17<01:35, 2.27s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.28s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:26, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:33<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:17, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.29s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.29s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:49<01:04, 2.29s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.29s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.29s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.29s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.29s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.29s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.29s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.29s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.29s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.29s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:12<00:41, 2.29s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:38, 2.29s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.29s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.29s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.29s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.29s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.29s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:28<00:25, 2.29s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.29s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.29s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.29s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.29s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.29s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.29s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:44<00:09, 2.29s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.29s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:49<00:04, 2.29s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:51<00:02, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.28s/it]\n[ComfyUI] Allocated memory: memory=12.300 GB\n[ComfyUI] Max allocated memory: max_memory=15.099 GB\n[ComfyUI] Max reserved memory: max_reserved=16.281 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.46s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 
1.24s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.27s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 26.07it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.56it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.05it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.96it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.92it/s]\n[ComfyUI] Prompt executed in 139.41 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 148.622353831, "total_time": 503.130128 }, "output": "https://replicate.delivery/xezq/pkO4wjqfxiXtWqDIcJXsIWm52RKscUVJDBf65URiOF52nYIUA/HunyuanVideo_00001.mp4", "started_at": "2025-01-24T18:09:37.765774Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-igosf2ompxu2vtidjjojsn7ieibjrghdlgfym7nnovc27khkhzhq", "get": "https://api.replicate.com/v1/predictions/njtramfd39rme0cmk5xvf1cak4", "cancel": "https://api.replicate.com/v1/predictions/njtramfd39rme0cmk5xvf1cak4/cancel" }, "version": "f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526" }
Generated in
Seed set to: 12345
⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements
⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements
USING REPLICATE WEIGHTS (preferred method)
🎯 USING REPLICATE WEIGHTS TAR FILE 🎯
----------------------------------------
📦 Processing replicate weights tar file...
🔄 Will rename LoRA to: replicate_f5962bf2-b3d9-4bdb-8fa0-b83ccef72d16.safetensors
📂 Extracting tar contents...
✅ Found lora_comfyui.safetensors in tar
✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_f5962bf2-b3d9-4bdb-8fa0-b83ccef72d16.safetensors
----------------------------------------
Checking inputs
====================================
Checking weights
✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae
✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models
====================================
Running workflow
[ComfyUI] got prompt
Executing node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode
[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 111
[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77
Executing node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect
Executing node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader
[ComfyUI] model_type FLOW
[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
[ComfyUI] Using accelerate to load and assign model weights to device...
[ComfyUI] Loading LoRA: replicate_f5962bf2-b3d9-4bdb-8fa0-b83ccef72d16 with strength: 1.0
[ComfyUI] Requested to load HyVideoModel
[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True
[ComfyUI] Input (height, width, video_length) = (368, 640, 65)
Executing node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler
[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps
[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])
[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]
[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.28s/it]
[ComfyUI] Allocated memory: memory=12.300 GB
[ComfyUI] Max allocated memory: max_memory=15.099 GB
[ComfyUI] Max reserved memory: max_reserved=16.281 GB
Executing node 5, title: HunyuanVideo Decode, class type: HyVideoDecode
[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.27s/it]
[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 26.07it/s]
[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.96it/s]
Executing node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine
[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 64.92it/s]
[ComfyUI] Prompt executed in 139.41 seconds
outputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}
====================================
HunyuanVideo_00001.png
HunyuanVideo_00001.mp4
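The log above shows the model silently rounding 640x360 up to 640x368 and trimming 66 frames to 65 before sampling. If you want the output to match what you asked for, you can pre-round the inputs yourself. The sketch below is an unofficial helper that assumes, based only on these logs, that width and height must be multiples of 16 and num_frames must have the form 4k + 1; the function names are made up for illustration.

// Sketch: pre-round inputs so the model doesn't have to adjust them.
// Assumptions inferred from the logs above (640x360 -> 640x368, 66 -> 65 frames):
// width/height rounded up to a multiple of 16, num_frames snapped to 4k + 1.
function snapDimension(value, multiple = 16) {
  return Math.ceil(value / multiple) * multiple;
}

function snapFrameCount(frames) {
  // nearest value of the form 4k + 1, never below 1
  return Math.max(1, Math.round((frames - 1) / 4) * 4 + 1);
}

console.log(snapDimension(360)); //=> 368
console.log(snapFrameCount(66)); //=> 65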
Prediction
deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526
ID: rsrx8xb615rme0cmn4abakxj74 · Status: Succeeded · Source: API · Hardware: H100
Input
- seed: 12345
- steps: 50
- width: 640
- height: 360
- prompt: A video in the style of NDNJN, NDNJN The video clip depicts A face emerging from darkness as they step into a beam of light
- frame_rate: 16
- num_frames: 66
- lora_strength: 1.2
- guidance_scale: 6
{ "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip depicts A face emerging from darkness as they step into a beam of light", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
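The client reads your API token from the REPLICATE_API_TOKEN environment variable. A quick guard before constructing the client makes a missing token fail loudly instead of surfacing later as an authentication error; this check is a suggested pattern, not something the library requires.

// Optional guard: fail fast if the token isn't set in the environment.
if (!process.env.REPLICATE_API_TOKEN) {
  throw new Error("Set REPLICATE_API_TOKEN before running this script.");
}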
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526",
  {
    input: {
      seed: 12345,
      steps: 50,
      width: 640,
      height: 360,
      prompt: "A video in the style of NDNJN, NDNJN The video clip depicts A face emerging from darkness as they step into a beam of light",
      frame_rate: 16,
      num_frames: 66,
      lora_strength: 1.2,
      guidance_scale: 6
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the generated video to disk (the output is an .mp4, not an image):
await fs.promises.writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
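replicate.run() blocks until the video finishes, which for this model took roughly two and a half minutes of GPU time per clip (see the logs above). If you would rather start the job and check on it later, the client also exposes replicate.predictions.create() and replicate.wait(); the sketch below shows that pattern with the same inputs and is meant as an illustration, not the canonical usage.

// Sketch: start the prediction without blocking, then wait for it explicitly.
import Replicate from "replicate";

const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

let prediction = await replicate.predictions.create({
  version: "f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526",
  input: {
    prompt: "A video in the style of NDNJN, NDNJN The video clip depicts A face emerging from darkness as they step into a beam of light",
    width: 640,
    height: 360,
    num_frames: 66,
    frame_rate: 16,
    lora_strength: 1.2,
    guidance_scale: 6,
  },
});

// Do other work here, then block until the prediction settles.
prediction = await replicate.wait(prediction);
console.log(prediction.status); //=> "succeeded"
console.log(prediction.output); // URL of the generated .mp4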
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526",
    input={
        "seed": 12345,
        "steps": 50,
        "width": 640,
        "height": 360,
        "prompt": "A video in the style of NDNJN, NDNJN The video clip depicts A face emerging from darkness as they step into a beam of light",
        "frame_rate": 16,
        "num_frames": 66,
        "lora_strength": 1.2,
        "guidance_scale": 6
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run deepfates/hunyuan-indiana-jones using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "deepfates/hunyuan-indiana-jones:f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526",
    "input": {
      "seed": 12345,
      "steps": 50,
      "width": 640,
      "height": 360,
      "prompt": "A video in the style of NDNJN, NDNJN The video clip depicts A face emerging from darkness as they step into a beam of light",
      "frame_rate": 16,
      "num_frames": 66,
      "lora_strength": 1.2,
      "guidance_scale": 6
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
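The Prefer: wait header asks the API to hold the connection open until the prediction finishes, but a multi-minute video generation can outlast that window, in which case the response comes back while the prediction is still processing and you poll the urls.get endpoint it contains (visible in the Output below). The following Node sketch does that with the built-in fetch; the two-second interval and the helper names are arbitrary choices for illustration.

// Sketch: poll a prediction's "get" URL until it leaves the processing states.
// Assumes Node 18+ (global fetch) and REPLICATE_API_TOKEN in the environment.
async function getPrediction(getUrl) {
  const res = await fetch(getUrl, {
    headers: { Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}` },
  });
  return res.json();
}

async function waitForPrediction(getUrl, intervalMs = 2000) {
  while (true) {
    const prediction = await getPrediction(getUrl);
    if (!["starting", "processing"].includes(prediction.status)) {
      return prediction; // "succeeded", "failed", or "canceled"
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Example, using the get URL from the Output section below:
// const done = await waitForPrediction("https://api.replicate.com/v1/predictions/rsrx8xb615rme0cmn4abakxj74");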
Output
{ "completed_at": "2025-01-27T18:51:07.593276Z", "created_at": "2025-01-27T18:44:33.929000Z", "data_removed": false, "error": null, "id": "rsrx8xb615rme0cmn4abakxj74", "input": { "seed": 12345, "steps": 50, "width": 640, "height": 360, "prompt": "A video in the style of NDNJN, NDNJN The video clip depicts A face emerging from darkness as they step into a beam of light", "frame_rate": 16, "num_frames": 66, "lora_strength": 1.2, "guidance_scale": 6 }, "logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\n�� USING REPLICATE WEIGHTS (preferred method)\n🎯 USING REPLICATE WEIGHTS TAR FILE 🎯\n----------------------------------------\n📦 Processing replicate weights tar file...\n🔄 Will rename LoRA to: replicate_51066d0e-6da6-4532-9357-a7b1a57f91c4.safetensors\n📂 Extracting tar contents...\n✅ Found lora_comfyui.safetensors in tar\n✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_51066d0e-6da6-4532-9357-a7b1a57f91c4.safetensors\n----------------------------------------\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\nExecuting node 42, title: HunyuanVideo Enhance A Video, class type: HyVideoEnhanceAVideo\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:01, 1.60it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.67it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:01<00:00, 1.68it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.45it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.09it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 33\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 32\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. 
Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: replicate_51066d0e-6da6-4532-9357-a7b1a57f91c4 with strength: 1.2\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:58, 2.42s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:39, 2.07s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:41, 2.17s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:42, 2.22s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:41, 2.25s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.26s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.27s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:18<01:35, 2.28s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:33, 2.28s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:31, 2.29s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.29s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.29s/it]\n[ComfyUI] 26%|██▌ | 13/50 [00:29<01:24, 2.29s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.29s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.29s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:17, 2.29s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:15, 2.29s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:40<01:13, 2.29s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.29s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:08, 2.29s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:47<01:06, 2.29s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.29s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:01, 2.29s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.29s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:56<00:57, 2.29s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.29s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:52, 2.29s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:03<00:50, 2.29s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.29s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:45, 2.29s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:10<00:43, 2.29s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.29s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:38, 2.29s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.29s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:19<00:34, 2.29s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.29s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.29s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:26<00:27, 2.29s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.29s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:22, 2.29s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20, 2.29s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:35<00:18, 2.29s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.29s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13, 2.29s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:42<00:11, 2.29s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.29s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.29s/it]\n[ComfyUI] 96%|█████████▌| 
48/50 [01:49<00:04, 2.29s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.29s/it]\n[ComfyUI] Allocated memory: memory=12.760 GB\n[ComfyUI] Max allocated memory: max_memory=15.559 GB\n[ComfyUI] Max reserved memory: max_reserved=16.875 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.50s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.25s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.29s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 28.79it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.49it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 3.00it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.91it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 86.09it/s]\n[ComfyUI] Prompt executed in 147.70 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 160.819897982, "total_time": 393.664276 }, "output": "https://replicate.delivery/xezq/P4kaDRnknU7XGtAlRHpw6oWgJCttWZeYdWEWImlb8w7NPsEKA/HunyuanVideo_00001.mp4", "started_at": "2025-01-27T18:48:26.773378Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bsvm-hopftykzjkbolnsd4oos3d2gu4fcoojwvcouikhyuvil2u5i4l4a", "get": "https://api.replicate.com/v1/predictions/rsrx8xb615rme0cmn4abakxj74", "cancel": "https://api.replicate.com/v1/predictions/rsrx8xb615rme0cmn4abakxj74/cancel" }, "version": "f4a9ac08b2f70053e70e8b7777cf0b47e365c995b5865d22a04432aebc8ad526" }
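In the raw API response above, output is just a URL to the finished .mp4 on replicate.delivery. A minimal way to save it locally with Node's built-in fetch and stream utilities is sketched below; the output.mp4 filename is arbitrary.

// Sketch: download the video URL from the prediction's "output" field.
// Assumes Node 18+ for global fetch and web-stream interop.
import { createWriteStream } from "node:fs";
import { Readable } from "node:stream";
import { pipeline } from "node:stream/promises";

const videoUrl =
  "https://replicate.delivery/xezq/P4kaDRnknU7XGtAlRHpw6oWgJCttWZeYdWEWImlb8w7NPsEKA/HunyuanVideo_00001.mp4";

const res = await fetch(videoUrl);
if (!res.ok) throw new Error(`Download failed: ${res.status}`);

// Convert the web ReadableStream to a Node stream and pipe it to disk.
await pipeline(Readable.fromWeb(res.body), createWriteStream("output.mp4"));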
Want to make some of these yourself?
Run this model