Readme
Hunyuan-Video model finetuned on Her (2013). Trigger word is "HR". Use "A video in the style of HR, HR" at the beginning of your prompt for best results.
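As a minimal illustration, the trigger phrase can be prepended programmatically. The helper below is hypothetical, not part of the model or client library:

```python
def style_prompt(description: str) -> str:
    """Prepend the HR trigger phrase so the fine-tuned style is applied."""
    return f"A video in the style of HR, HR {description}"

print(style_prompt("A man reads a letter by a window."))
```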
Run this model in Node.js. First, install the client library:

npm install replicate

Then set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run deepfates/hunyuan-her using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
"deepfates/hunyuan-her:a4289ea2d95ec71b94fcbf243d47e406b473b93e5bb700c5ec8966730d8ca63d",
{
input: {
crf: 19,
seed: 12345,
steps: 50,
width: 640,
height: 360,
prompt: "A video in the style of HR, HR The video clip depicts a beach scene with several people enjoying their time. In the foreground, a man with curly hair and a mustache is wearing glasses and a red and white checkered shirt. He is sitting on a beach chair and appears to be laughing or smiling, looking off to the side. In the background, there are other people sitting on the beach, some under umbrellas, and others lying on towels. The beach is populated with various beachgoers, and the atmosphere seems relaxed and leisurely. The overall scene conveys a sense of a typical day at the beach with people engaging in typical beach activities.",
lora_url: "",
scheduler: "DPMSolverMultistepScheduler",
flow_shift: 9,
frame_rate: 16,
num_frames: 66,
enhance_end: 1,
enhance_start: 0,
force_offload: true,
lora_strength: 1,
enhance_double: true,
enhance_single: true,
enhance_weight: 0.3,
guidance_scale: 6,
denoise_strength: 1
}
}
);
import { writeFile } from "node:fs/promises";

// To access the file URL:
console.log(output.url());

// To write the generated video to disk:
await writeFile("output.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Run this model in Python. First, install the client library:

pip install replicate

Then set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
import replicate
Run deepfates/hunyuan-her using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
"deepfates/hunyuan-her:a4289ea2d95ec71b94fcbf243d47e406b473b93e5bb700c5ec8966730d8ca63d",
input={
"crf": 19,
"seed": 12345,
"steps": 50,
"width": 640,
"height": 360,
"prompt": "A video in the style of HR, HR The video clip depicts a beach scene with several people enjoying their time. In the foreground, a man with curly hair and a mustache is wearing glasses and a red and white checkered shirt. He is sitting on a beach chair and appears to be laughing or smiling, looking off to the side. In the background, there are other people sitting on the beach, some under umbrellas, and others lying on towels. The beach is populated with various beachgoers, and the atmosphere seems relaxed and leisurely. The overall scene conveys a sense of a typical day at the beach with people engaging in typical beach activities.",
"lora_url": "",
"scheduler": "DPMSolverMultistepScheduler",
"flow_shift": 9,
"frame_rate": 16,
"num_frames": 66,
"enhance_end": 1,
"enhance_start": 0,
"force_offload": True,
"lora_strength": 1,
"enhance_double": True,
"enhance_single": True,
"enhance_weight": 0.3,
"guidance_scale": 6,
"denoise_strength": 1
}
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
Run deepfates/hunyuan-her using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "deepfates/hunyuan-her:a4289ea2d95ec71b94fcbf243d47e406b473b93e5bb700c5ec8966730d8ca63d",
"input": {
"crf": 19,
"seed": 12345,
"steps": 50,
"width": 640,
"height": 360,
"prompt": "A video in the style of HR, HR The video clip depicts a beach scene with several people enjoying their time. In the foreground, a man with curly hair and a mustache is wearing glasses and a red and white checkered shirt. He is sitting on a beach chair and appears to be laughing or smiling, looking off to the side. In the background, there are other people sitting on the beach, some under umbrellas, and others lying on towels. The beach is populated with various beachgoers, and the atmosphere seems relaxed and leisurely. The overall scene conveys a sense of a typical day at the beach with people engaging in typical beach activities.",
"lora_url": "",
"scheduler": "DPMSolverMultistepScheduler",
"flow_shift": 9,
"frame_rate": 16,
"num_frames": 66,
"enhance_end": 1,
"enhance_start": 0,
"force_offload": true,
"lora_strength": 1,
"enhance_double": true,
"enhance_single": true,
"enhance_weight": 0.3,
"guidance_scale": 6,
"denoise_strength": 1
}
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
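Without the Prefer: wait header, the POST returns immediately and the prediction can be polled at its urls.get endpoint until it reaches a terminal status. A rough sketch, assuming the standard Replicate statuses (the example response below shows "succeeded"):

```python
import json
import time
import urllib.request

TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}

def is_terminal(status: str) -> bool:
    # A prediction stops changing once it reaches one of these statuses
    return status in TERMINAL_STATUSES

def poll_prediction(get_url: str, token: str, interval: float = 2.0) -> dict:
    # Repeatedly GET the prediction's urls.get endpoint until it finishes
    while True:
        req = urllib.request.Request(
            get_url, headers={"Authorization": f"Bearer {token}"}
        )
        with urllib.request.urlopen(req) as resp:
            prediction = json.load(resp)
        if is_terminal(prediction["status"]):
            return prediction
        time.sleep(interval)
```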
Example prediction response:
{
"completed_at": "2025-01-24T05:06:11.099198Z",
"created_at": "2025-01-24T05:03:01.072000Z",
"data_removed": false,
"error": null,
"id": "2c9sprsbt1rmc0cmjtrsnmqnq8",
"input": {
"crf": 19,
"seed": 12345,
"steps": 50,
"width": 640,
"height": 360,
"prompt": "A video in the style of HR, HR The video clip depicts a beach scene with several people enjoying their time. In the foreground, a man with curly hair and a mustache is wearing glasses and a red and white checkered shirt. He is sitting on a beach chair and appears to be laughing or smiling, looking off to the side. In the background, there are other people sitting on the beach, some under umbrellas, and others lying on towels. The beach is populated with various beachgoers, and the atmosphere seems relaxed and leisurely. The overall scene conveys a sense of a typical day at the beach with people engaging in typical beach activities.",
"lora_url": "",
"scheduler": "DPMSolverMultistepScheduler",
"flow_shift": 9,
"frame_rate": 16,
"num_frames": 66,
"enhance_end": 1,
"enhance_start": 0,
"force_offload": true,
"lora_strength": 1,
"enhance_double": true,
"enhance_single": true,
"enhance_weight": 0.3,
"guidance_scale": 6,
"denoise_strength": 1
},
"logs": "Seed set to: 12345\n⚠️ Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements\n⚠️ Adjusted frame count from 66 to 65 to satisfy model requirements\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\nExecuting node 42, title: HunyuanVideo Enhance A Video, class type: HyVideoEnhanceAVideo\n[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:01, 1.77it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.73it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:01<00:00, 1.76it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.58it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.20it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 135\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 
77\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: lora_comfyui with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (368, 640, 65)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\n[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])\n[ComfyUI]\n[ComfyUI] 0%| | 0/50 [00:00<?, ?it/s]\n[ComfyUI] 2%|▏ | 1/50 [00:02<01:58, 2.42s/it]\n[ComfyUI] 4%|▍ | 2/50 [00:04<01:39, 2.06s/it]\n[ComfyUI] 6%|▌ | 3/50 [00:06<01:41, 2.17s/it]\n[ComfyUI] 8%|▊ | 4/50 [00:08<01:42, 2.22s/it]\n[ComfyUI] 10%|█ | 5/50 [00:11<01:41, 2.25s/it]\n[ComfyUI] 12%|█▏ | 6/50 [00:13<01:39, 2.27s/it]\n[ComfyUI] 14%|█▍ | 7/50 [00:15<01:37, 2.28s/it]\n[ComfyUI] 16%|█▌ | 8/50 [00:18<01:36, 2.30s/it]\n[ComfyUI] 18%|█▊ | 9/50 [00:20<01:34, 2.30s/it]\n[ComfyUI] 20%|██ | 10/50 [00:22<01:32, 2.30s/it]\n[ComfyUI] 22%|██▏ | 11/50 [00:24<01:29, 2.30s/it]\n[ComfyUI] 24%|██▍ | 12/50 [00:27<01:27, 2.30s/it]\n[ComfyUI] 26%|██▌ | 13/50 
[00:29<01:25, 2.30s/it]\n[ComfyUI] 28%|██▊ | 14/50 [00:31<01:22, 2.30s/it]\n[ComfyUI] 30%|███ | 15/50 [00:34<01:20, 2.30s/it]\n[ComfyUI] 32%|███▏ | 16/50 [00:36<01:18, 2.30s/it]\n[ComfyUI] 34%|███▍ | 17/50 [00:38<01:16, 2.30s/it]\n[ComfyUI] 36%|███▌ | 18/50 [00:41<01:13, 2.30s/it]\n[ComfyUI] 38%|███▊ | 19/50 [00:43<01:11, 2.30s/it]\n[ComfyUI] 40%|████ | 20/50 [00:45<01:09, 2.30s/it]\n[ComfyUI] 42%|████▏ | 21/50 [00:48<01:06, 2.30s/it]\n[ComfyUI] 44%|████▍ | 22/50 [00:50<01:04, 2.30s/it]\n[ComfyUI] 46%|████▌ | 23/50 [00:52<01:02, 2.30s/it]\n[ComfyUI] 48%|████▊ | 24/50 [00:54<00:59, 2.30s/it]\n[ComfyUI] 50%|█████ | 25/50 [00:57<00:57, 2.30s/it]\n[ComfyUI] 52%|█████▏ | 26/50 [00:59<00:55, 2.30s/it]\n[ComfyUI] 54%|█████▍ | 27/50 [01:01<00:53, 2.30s/it]\n[ComfyUI] 56%|█████▌ | 28/50 [01:04<00:50, 2.30s/it]\n[ComfyUI] 58%|█████▊ | 29/50 [01:06<00:48, 2.30s/it]\n[ComfyUI] 60%|██████ | 30/50 [01:08<00:46, 2.30s/it]\n[ComfyUI] 62%|██████▏ | 31/50 [01:11<00:43, 2.30s/it]\n[ComfyUI] 64%|██████▍ | 32/50 [01:13<00:41, 2.30s/it]\n[ComfyUI] 66%|██████▌ | 33/50 [01:15<00:39, 2.31s/it]\n[ComfyUI] 68%|██████▊ | 34/50 [01:17<00:36, 2.31s/it]\n[ComfyUI] 70%|███████ | 35/50 [01:20<00:34, 2.30s/it]\n[ComfyUI] 72%|███████▏ | 36/50 [01:22<00:32, 2.30s/it]\n[ComfyUI] 74%|███████▍ | 37/50 [01:24<00:29, 2.30s/it]\n[ComfyUI] 76%|███████▌ | 38/50 [01:27<00:27, 2.31s/it]\n[ComfyUI] 78%|███████▊ | 39/50 [01:29<00:25, 2.30s/it]\n[ComfyUI] 80%|████████ | 40/50 [01:31<00:23, 2.31s/it]\n[ComfyUI] 82%|████████▏ | 41/50 [01:34<00:20, 2.31s/it]\n[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18, 2.31s/it]\n[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16, 2.31s/it]\n[ComfyUI] 88%|████████▊ | 44/50 [01:41<00:13, 2.30s/it]\n[ComfyUI] 90%|█████████ | 45/50 [01:43<00:11, 2.30s/it]\n[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09, 2.30s/it]\n[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06, 2.30s/it]\n[ComfyUI] 96%|█████████▌| 48/50 [01:50<00:04, 2.30s/it]\n[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02, 2.30s/it]\n[ComfyUI] 
100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] 100%|██████████| 50/50 [01:54<00:00, 2.30s/it]\n[ComfyUI] Allocated memory: memory=12.760 GB\n[ComfyUI] Max allocated memory: max_memory=15.559 GB\n[ComfyUI] Max reserved memory: max_reserved=16.875 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:01<00:01, 1.50s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.25s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00, 1.29s/it]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 28.11it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/2 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 50%|█████ | 1/2 [00:00<00:00, 2.48it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.99it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00, 2.90it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/2 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 84.36it/s]\n[ComfyUI] Prompt executed in 148.54 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4",
"metrics": {
"predict_time": 163.065343157,
"total_time": 190.027198
},
"output": "https://replicate.delivery/xezq/xPdy1MCEzqreXynpyUo3WHHijdOFw4J0dgQHZF2bUIyhjGEKA/HunyuanVideo_00001.mp4",
"started_at": "2025-01-24T05:03:28.033855Z",
"status": "succeeded",
"urls": {
"stream": "https://stream.replicate.com/v1/files/bsvm-6qlumqsy2avdszyunijc6yintvlh6n6vr7yektp6rlrl6q6nt3ka",
"get": "https://api.replicate.com/v1/predictions/2c9sprsbt1rmc0cmjtrsnmqnq8",
"cancel": "https://api.replicate.com/v1/predictions/2c9sprsbt1rmc0cmjtrsnmqnq8/cancel"
},
"version": "a4289ea2d95ec71b94fcbf243d47e406b473b93e5bb700c5ec8966730d8ca63d"
}
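The logs above show the inputs being adjusted (640x360 to 640x368, 66 frames to 65) before sampling. A plausible sketch of those constraints, assuming dimensions are rounded up to a multiple of 16 and frame counts must have the form 4k + 1 (which matches the "65 frames in 17 latents" log line):

```python
def adjust_dimension(x: int, multiple: int = 16) -> int:
    # Round up to the nearest multiple of 16 (360 -> 368)
    return -(-x // multiple) * multiple

def adjust_num_frames(n: int) -> int:
    # Temporal compression is 4x, so the frame count must be 4k + 1 (66 -> 65)
    return ((n - 1) // 4) * 4 + 1

def num_latents(n_frames: int) -> int:
    # Number of latent frames the sampler actually produces
    return (n_frames - 1) // 4 + 1
```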
This model runs on Nvidia H100 GPU hardware. We don't yet have enough runs of this model to provide performance information.
This model is cold. You'll get a fast response if the model is warm and already running, and a slower response if the model is cold and starting up.