zsxkib / hunyuan-video-lora-rose-number-one-girl
- Public
- 18 runs
- H100
- Fine-tune

Prediction
zsxkib/hunyuan-video-lora-rose-number-one-girl:5642ffd79441e200e3d897438168a6f074e6510ebd6a0af2ef6fcb0eb20dd2d8
ID: 3h5vh7xgjdrm80cm86asby7b4r
Status: Succeeded
Source: Web
Hardware: H100
Total duration
Created
Input
- crf: 19
- steps: 30
- width: 512
- height: 512
- prompt: In the style of RSNG. A woman with blonde hair stands on a balcony at night, framed against a backdrop of city lights. She wears a white crop top and a dark jacket, exuding a confident presence as she gazes directly at the camera
- lora_url: ""
- flow_shift: 9
- frame_rate: 15
- num_frames: 85
- force_offload: true
- lora_strength: 1
- guidance_scale: 6
- denoise_strength: 1
{ "crf": 19, "steps": 30, "width": 512, "height": 512, "prompt": "In the style of RSNG. A woman with blonde hair stands on a balcony at night, framed against a backdrop of city lights. She wears a white crop top and a dark jacket, exuding a confident presence as she gazes directly at the camera", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 85, "force_offload": true, "lora_strength": 1, "guidance_scale": 6, "denoise_strength": 1 }
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run zsxkib/hunyuan-video-lora-rose-number-one-girl using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "zsxkib/hunyuan-video-lora-rose-number-one-girl:5642ffd79441e200e3d897438168a6f074e6510ebd6a0af2ef6fcb0eb20dd2d8",
  {
    input: {
      crf: 19,
      steps: 30,
      width: 512,
      height: 512,
      prompt: "In the style of RSNG. A woman with blonde hair stands on a balcony at night, framed against a backdrop of city lights. She wears a white crop top and a dark jacket, exuding a confident presence as she gazes directly at the camera",
      lora_url: "",
      flow_shift: 9,
      frame_rate: 15,
      num_frames: 85,
      force_offload: true,
      lora_strength: 1,
      guidance_scale: 6,
      denoise_strength: 1
    }
  }
);

// To access the file URL:
console.log(output.url());

// To write the video to disk (requires `import { writeFile } from "node:fs/promises"` at the top):
await writeFile("my-video.mp4", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run zsxkib/hunyuan-video-lora-rose-number-one-girl using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "zsxkib/hunyuan-video-lora-rose-number-one-girl:5642ffd79441e200e3d897438168a6f074e6510ebd6a0af2ef6fcb0eb20dd2d8",
    input={
        "crf": 19,
        "steps": 30,
        "width": 512,
        "height": 512,
        "prompt": "In the style of RSNG. A woman with blonde hair stands on a balcony at night, framed against a backdrop of city lights. She wears a white crop top and a dark jacket, exuding a confident presence as she gazes directly at the camera",
        "lora_url": "",
        "flow_shift": 9,
        "frame_rate": 15,
        "num_frames": 85,
        "force_offload": True,
        "lora_strength": 1,
        "guidance_scale": 6,
        "denoise_strength": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
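The Python snippet above only prints the output, which resolves to the video's delivery URL. A minimal sketch of saving that video locally follows; the helper name and the standard-library download are illustrative, not part of the Replicate client.

```python
# Sketch: save the prediction's output video locally.
# `filename_from_url` is a hypothetical helper, not a Replicate API.
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

def filename_from_url(url: str) -> str:
    """Derive a local filename from the prediction's output URL."""
    name = os.path.basename(urlparse(url).path)
    return name or "output.mp4"  # fall back when the URL has no path

# After replicate.run(...) returns, str(output) is the video URL:
# url = str(output)
# urlretrieve(url, filename_from_url(url))

print(filename_from_url(
    "https://replicate.delivery/xezq/abc123/HunyuanVideo_00001.mp4"
))  # HunyuanVideo_00001.mp4
```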
Run zsxkib/hunyuan-video-lora-rose-number-one-girl using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "zsxkib/hunyuan-video-lora-rose-number-one-girl:5642ffd79441e200e3d897438168a6f074e6510ebd6a0af2ef6fcb0eb20dd2d8",
    "input": {
      "crf": 19,
      "steps": 30,
      "width": 512,
      "height": 512,
      "prompt": "In the style of RSNG. A woman with blonde hair stands on a balcony at night, framed against a backdrop of city lights. She wears a white crop top and a dark jacket, exuding a confident presence as she gazes directly at the camera",
      "lora_url": "",
      "flow_shift": 9,
      "frame_rate": 15,
      "num_frames": 85,
      "force_offload": true,
      "lora_strength": 1,
      "guidance_scale": 6,
      "denoise_strength": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
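The `Prefer: wait` header holds the HTTP connection open until the prediction finishes (or a timeout elapses). Without it, you poll the prediction's `urls.get` endpoint until it reaches a terminal status. A small sketch of that status check, assuming the terminal statuses listed in Replicate's prediction lifecycle:

```python
# Sketch: decide when to stop polling a prediction's `urls.get` endpoint.
# Predictions pass through "starting" and "processing" before ending in
# one of the terminal states below (an assumption based on the lifecycle
# visible in the sample response on this page).
TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}

def is_finished(prediction: dict) -> bool:
    """True once the prediction has reached a terminal status."""
    return prediction.get("status") in TERMINAL_STATUSES

print(is_finished({"status": "processing"}))  # False -> keep polling
print(is_finished({"status": "succeeded"}))   # True -> read "output"
```

In a real polling loop you would sleep a second or two between GET requests rather than hammering the endpoint.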
Output
{ "completed_at": "2025-01-07T16:29:55.679877Z", "created_at": "2025-01-07T16:25:16.691000Z", "data_removed": false, "error": null, "id": "3h5vh7xgjdrm80cm86asby7b4r", "input": { "crf": 19, "steps": 30, "width": 512, "height": 512, "prompt": "In the style of RSNG. A woman with blonde hair stands on a balcony at night, framed against a backdrop of city lights. She wears a white crop top and a dark jacket, exuding a confident presence as she gazes directly at the camera", "lora_url": "", "flow_shift": 9, "frame_rate": 15, "num_frames": 85, "force_offload": true, "lora_strength": 1, "guidance_scale": 6, "denoise_strength": 1 }, "logs": "Random seed set to: 537314953\nChecking inputs\n====================================\nChecking weights\n✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae\n✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader\n[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\nExecuting node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14\n[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n[ComfyUI]\n[ComfyUI] Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\n[ComfyUI] Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:01, 1.72it/s]\n[ComfyUI] Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.74it/s]\n[ComfyUI] Loading checkpoint shards: 75%|███████▌ | 3/4 [00:01<00:00, 1.77it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.61it/s]\n[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 
2.21it/s]\n[ComfyUI] Text encoder to dtype: torch.float16\n[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\nExecuting node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode\n[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 52\n[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 54\nExecuting node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect\nExecuting node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader\n[ComfyUI] model_type FLOW\n[ComfyUI] Using accelerate to load and assign model weights to device...\n[ComfyUI] Loading LoRA: lora_comfyui with strength: 1.0\n[ComfyUI] Requested to load HyVideoModel\n[ComfyUI] Loading 1 new model\n[ComfyUI] loaded completely 0.0 12555.953247070312 True\n[ComfyUI] Input (height, width, video_length) = (512, 512, 85)\nExecuting node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler\n[ComfyUI] Sampling 85 frames in 22 latents at 512x512 with 30 inference steps\n[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:03<01:31, 3.15s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:07<01:45, 3.76s/it]\n[ComfyUI] 10%|█ | 3/30 [00:11<01:46, 3.96s/it]\n[ComfyUI] 13%|█▎ | 4/30 [00:15<01:45, 4.05s/it]\n[ComfyUI] 17%|█▋ | 5/30 [00:19<01:42, 4.10s/it]\n[ComfyUI] 20%|██ | 6/30 [00:24<01:39, 4.13s/it]\n[ComfyUI] 23%|██▎ | 7/30 [00:28<01:35, 4.14s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:32<01:31, 4.16s/it]\n[ComfyUI] 30%|███ | 9/30 [00:36<01:27, 4.16s/it]\n[ComfyUI] 33%|███▎ | 10/30 [00:40<01:23, 4.17s/it]\n[ComfyUI] 37%|███▋ | 11/30 [00:44<01:19, 4.17s/it]\n[ComfyUI] 40%|████ | 12/30 [00:49<01:15, 4.18s/it]\n[ComfyUI] 43%|████▎ | 13/30 [00:53<01:11, 
4.18s/it]\n[ComfyUI] 47%|████▋ | 14/30 [00:57<01:06, 4.18s/it]\n[ComfyUI] 50%|█████ | 15/30 [01:01<01:02, 4.18s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:05<00:58, 4.18s/it]\n[ComfyUI] 57%|█████▋ | 17/30 [01:10<00:54, 4.18s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:14<00:50, 4.18s/it]\n[ComfyUI] 63%|██████▎ | 19/30 [01:18<00:46, 4.18s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:22<00:41, 4.19s/it]\n[ComfyUI] 70%|███████ | 21/30 [01:26<00:37, 4.19s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [01:31<00:33, 4.19s/it]\n[ComfyUI] 77%|███████▋ | 23/30 [01:35<00:29, 4.18s/it]\n[ComfyUI] 80%|████████ | 24/30 [01:39<00:25, 4.18s/it]\n[ComfyUI] 83%|████████▎ | 25/30 [01:43<00:20, 4.19s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [01:47<00:16, 4.19s/it]\n[ComfyUI] 90%|█████████ | 27/30 [01:51<00:12, 4.18s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [01:56<00:08, 4.18s/it]\n[ComfyUI] 97%|█████████▋| 29/30 [02:00<00:04, 4.19s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:04<00:00, 4.19s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:04<00:00, 4.15s/it]\n[ComfyUI] Allocated memory: memory=12.759 GB\n[ComfyUI] Max allocated memory: max_memory=17.507 GB\n[ComfyUI] Max reserved memory: max_reserved=19.375 GB\nExecuting node 5, title: HunyuanVideo Decode, class type: HyVideoDecode\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:01<00:02, 1.04s/it]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:02<00:01, 1.13s/it]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.07it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:02<00:00, 1.02it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 28.26it/s]\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 28.22it/s]\n[ComfyUI]\n[ComfyUI] Decoding rows: 0%| | 0/3 [00:00<?, ?it/s]\n[ComfyUI] Decoding rows: 33%|███▎ | 1/3 [00:00<00:01, 1.71it/s]\n[ComfyUI] Decoding rows: 67%|██████▋ | 2/3 [00:01<00:00, 1.56it/s]\n[ComfyUI] 
Decoding rows: 100%|██████████| 3/3 [00:01<00:00, 1.89it/s]\n[ComfyUI] Decoding rows: 100%|██████████| 3/3 [00:01<00:00, 1.80it/s]\n[ComfyUI]\n[ComfyUI] Blending tiles: 0%| | 0/3 [00:00<?, ?it/s]\nExecuting node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Blending tiles: 100%|██████████| 3/3 [00:00<00:00, 46.84it/s]\n[ComfyUI] Prompt executed in 160.41 seconds\noutputs: {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 15.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}\n====================================\nHunyuanVideo_00001.png\nHunyuanVideo_00001.mp4", "metrics": { "predict_time": 172.564856796, "total_time": 278.988877 }, "output": "https://replicate.delivery/xezq/0D27GesdGB1vHqcvRRUqqvrVjq4oMQ7WRmRitHI2eDaDiwCUA/HunyuanVideo_00001.mp4", "started_at": "2025-01-07T16:27:03.115021Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-z4awd56pgk4ril2ekmgbxmoymst5xt2sphy5eynx53az2bvr2qka", "get": "https://api.replicate.com/v1/predictions/3h5vh7xgjdrm80cm86asby7b4r", "cancel": "https://api.replicate.com/v1/predictions/3h5vh7xgjdrm80cm86asby7b4r/cancel" }, "version": "5642ffd79441e200e3d897438168a6f074e6510ebd6a0af2ef6fcb0eb20dd2d8" }
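The `metrics` block in the response above separates time spent running the model (`predict_time`) from total wall-clock time (`total_time`); the difference is roughly queue plus cold-start overhead. A small sketch of reading it:

```python
# Sketch: split a prediction's wall-clock time into inference vs. overhead.
def queue_and_setup_seconds(metrics: dict) -> float:
    """Wall-clock time not spent on inference itself."""
    return metrics["total_time"] - metrics["predict_time"]

# Values taken from the sample response on this page:
metrics = {"predict_time": 172.564856796, "total_time": 278.988877}
print(round(queue_and_setup_seconds(metrics), 1))  # 106.4 seconds of overhead
```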