WAI-NSFW-illustrious-SDXL v.90
Run this model in Node.js. First, install Replicate's Node.js client library:

npm install replicate
Then set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run littlemonsterzhang/wai90_sdxl using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "littlemonsterzhang/wai90_sdxl:820ce2c86370ccfac38e9126bcffc58d23348a0ab06179c4b2f49c444ef2d0a6",
  {
    input: {
      prompt: "glowing eyes, streaked hair, glowing inner hair, glowing streaks, 1girl, jinhsi \\(wuthering_waves\\), white hair,blonde_eyes, large_breasts, (upper_body,close-up:1.4),Dynamic pose,串,((aqua and yellow:0.85) but Limited palette:1.2),//,Chinese architecture,Ink,hair flowers,bamboo forest,bamboo,//(long hair),small breasts,smile,((dragon)),Yellow lightning,(makeup,white eyeliner),white eyeshadow,white eyes,(long eyelashes),half-closed eyes,Dragon skirt,blush,holding sword,chinese sword,(dynamic angle:1.2),(hanfu:1.4),chinese clothes,transparent clothes,tassel,chinese knot,bare shoulders,kanzashi,draped silk,gold trim,wind,bokeh,scattered leaves,flying splashes,waterfall,splashed water,looking at viewer, \n,masterpiece,best quality,amazing quality,",
      negative_prompt: "bad quality,worst quality,worst detail,sketch,censor,",
      randomise_seeds: true
    }
  }
);
// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
import { writeFile } from "node:fs/promises";
await writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Run this model in Python. First, install Replicate's Python client library:

pip install replicate
Then set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
import replicate
Run littlemonsterzhang/wai90_sdxl using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "littlemonsterzhang/wai90_sdxl:820ce2c86370ccfac38e9126bcffc58d23348a0ab06179c4b2f49c444ef2d0a6",
    input={
        "prompt": "glowing eyes, streaked hair, glowing inner hair, glowing streaks, 1girl, jinhsi \\(wuthering_waves\\), white hair,blonde_eyes, large_breasts, (upper_body,close-up:1.4),Dynamic pose,串,((aqua and yellow:0.85) but Limited palette:1.2),//,Chinese architecture,Ink,hair flowers,bamboo forest,bamboo,//(long hair),small breasts,smile,((dragon)),Yellow lightning,(makeup,white eyeliner),white eyeshadow,white eyes,(long eyelashes),half-closed eyes,Dragon skirt,blush,holding sword,chinese sword,(dynamic angle:1.2),(hanfu:1.4),chinese clothes,transparent clothes,tassel,chinese knot,bare shoulders,kanzashi,draped silk,gold trim,wind,bokeh,scattered leaves,flying splashes,waterfall,splashed water,looking at viewer, \n,masterpiece,best quality,amazing quality,",
        "negative_prompt": "bad quality,worst quality,worst detail,sketch,censor,",
        "randomise_seeds": True
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
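In Python, the returned output items can be written to disk as well. A minimal sketch, assuming each item in the list returned by replicate.run is file-like (exposes a .read() method, as the client's file outputs do); save_outputs is a hypothetical helper name, not part of the replicate library:

```python
import pathlib

def save_outputs(outputs, dest_dir="."):
    """Write each file-like output (anything exposing .read()) to dest_dir.

    Returns the list of paths written. With the replicate client you would
    pass the list returned by replicate.run(...).
    """
    paths = []
    for i, item in enumerate(outputs):
        path = pathlib.Path(dest_dir) / f"output_{i}.png"
        path.write_bytes(item.read())  # read the bytes and write them out
        paths.append(path)
    return paths
```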
Run this model with the HTTP API. Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
Run littlemonsterzhang/wai90_sdxl using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "littlemonsterzhang/wai90_sdxl:820ce2c86370ccfac38e9126bcffc58d23348a0ab06179c4b2f49c444ef2d0a6",
"input": {
"prompt": "glowing eyes, streaked hair, glowing inner hair, glowing streaks, 1girl, jinhsi \\\\(wuthering_waves\\\\), white hair,blonde_eyes, large_breasts, (upper_body,close-up:1.4),Dynamic pose,串,((aqua and yellow:0.85) but Limited palette:1.2),//,Chinese architecture,Ink,hair flowers,bamboo forest,bamboo,//(long hair),small breasts,smile,((dragon)),Yellow lightning,(makeup,white eyeliner),white eyeshadow,white eyes,(long eyelashes),half-closed eyes,Dragon skirt,blush,holding sword,chinese sword,(dynamic angle:1.2),(hanfu:1.4),chinese clothes,transparent clothes,tassel,chinese knot,bare shoulders,kanzashi,draped silk,gold trim,wind,bokeh,scattered leaves,flying splashes,waterfall,splashed water,looking at viewer, \\n,masterpiece,best quality,amazing quality,",
"negative_prompt": "bad quality,worst quality,worst detail,sketch,censor,",
"randomise_seeds": true
}
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
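The same request can be assembled in code rather than shelled out to curl. A sketch that builds the headers and JSON body the curl example sends (build_request is a hypothetical helper; sending the POST, e.g. with the requests library, is left commented out):

```python
import json
import os

API_URL = "https://api.replicate.com/v1/predictions"
VERSION = "littlemonsterzhang/wai90_sdxl:820ce2c86370ccfac38e9126bcffc58d23348a0ab06179c4b2f49c444ef2d0a6"

def build_request(prompt, negative_prompt="", randomise_seeds=True):
    """Return (headers, body) for a create-prediction call.

    Mirrors the curl example above; the "Prefer: wait" header asks the
    API to hold the connection open until the prediction finishes.
    """
    headers = {
        "Authorization": f"Bearer {os.environ.get('REPLICATE_API_TOKEN', '')}",
        "Content-Type": "application/json",
        "Prefer": "wait",
    }
    body = json.dumps({
        "version": VERSION,
        "input": {
            "prompt": prompt,
            "negative_prompt": negative_prompt,
            "randomise_seeds": randomise_seeds,
        },
    })
    return headers, body

# To send it, for example with the requests library:
#   headers, body = build_request("a prompt")
#   resp = requests.post(API_URL, headers=headers, data=body)
```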
Example output:
{
"completed_at": "2025-04-21T02:27:27.485953Z",
"created_at": "2025-04-21T02:25:38.036000Z",
"data_removed": false,
"error": null,
"id": "qcjrqt2ryhrma0cparna6tpetc",
"input": {
"prompt": "glowing eyes, streaked hair, glowing inner hair, glowing streaks, 1girl, jinhsi \\(wuthering_waves\\), white hair,blonde_eyes, large_breasts, (upper_body,close-up:1.4),Dynamic pose,串,((aqua and yellow:0.85) but Limited palette:1.2),//,Chinese architecture,Ink,hair flowers,bamboo forest,bamboo,//(long hair),small breasts,smile,((dragon)),Yellow lightning,(makeup,white eyeliner),white eyeshadow,white eyes,(long eyelashes),half-closed eyes,Dragon skirt,blush,holding sword,chinese sword,(dynamic angle:1.2),(hanfu:1.4),chinese clothes,transparent clothes,tassel,chinese knot,bare shoulders,kanzashi,draped silk,gold trim,wind,bokeh,scattered leaves,flying splashes,waterfall,splashed water,looking at viewer, \n,masterpiece,best quality,amazing quality,",
"negative_prompt": "bad quality,worst quality,worst detail,sketch,censor,",
"randomise_seeds": true
},
"logs": "【load_workflow path】 examples/api_workflows/sdxl_lora_work_api.json\n【handle_known_unsupported_nodes】done\nChecking inputs\n====================================\n【handle_inputs】done\nChecking weights\n【start check_weights if exists】 sdxl_vae.safetensors\ncheck_if_file sdxl_vae.safetensors exists: models/vae\n✅ sdxl_vae.safetensors exists in models/vae\n【start check_weights if exists】 waiNSFWIllustrious_v90.safetensors\ncheck_if_file waiNSFWIllustrious_v90.safetensors exists: models/checkpoints\n✅ waiNSFWIllustrious_v90.safetensors exists in models/checkpoints\n====================================\n【handle_weights】done\nRandomising seed to 3339211691\n------ Running workflow ------\n[ComfyUI] got prompt\n------ Running prompt_id ------\nExecuting node 41, title: 加载VAE, class type: VAELoader\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16\nExecuting node 5, title: 空Latent图像, class type: EmptyLatentImage\nExecuting node 40, title: Checkpoint加载器(简易), class type: CheckpointLoaderSimple\n[ComfyUI] model weight dtype torch.float16, manual cast: None\n[ComfyUI] model_type EPS\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16\n[ComfyUI] CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16\nExecuting node 7, title: CLIP文本编码, class type: CLIPTextEncode\n[ComfyUI] Requested to load SDXLClipModel\n[ComfyUI] loaded completely 43939.05 1560.802734375 True\nExecuting node 42, title: 设置CLIP最后一层, class type: CLIPSetLastLayer\nExecuting node 6, title: CLIP文本编码, class type: CLIPTextEncode\n[ComfyUI] Requested to load SDXLClipModel\nExecuting node 3, title: K采样器, class type: KSampler\n[ComfyUI] Requested to load SDXL\n[ComfyUI] loaded completely 42272.12215499878 4897.0483474731445 True\n[ComfyUI] 【item】: 
9\n[ComfyUI] 【inputs】: {'filename_prefix': 'ComfyUI', 'images': ['8', 0]}\n[ComfyUI] 【class_type】: SaveImage\n[ComfyUI]\n[ComfyUI] 【obj_class】: <class 'nodes.SaveImage'>\n[ComfyUI] 【class_inputs】: {'required': {'images': ('IMAGE', {'tooltip': 'The images to save.'}), 'filename_prefix': ('STRING', {'default': 'ComfyUI', 'tooltip': 'The prefix for the file to save. This may include formatting information such as %date:yyyy-MM-dd% or %Empty Latent Image.width% to include values from nodes.'})}, 'hidden': {'prompt': 'PROMPT', 'extra_pnginfo': 'EXTRA_PNGINFO'}}\n[ComfyUI] 【valid_inputs】: {'filename_prefix', 'images'}\n[ComfyUI] 【item】: 8\n[ComfyUI] 【inputs】: {'samples': ['3', 0], 'vae': ['41', 0]}\n[ComfyUI] 【class_type】: VAEDecode\n[ComfyUI] 【obj_class】: <class 'nodes.VAEDecode'>\n[ComfyUI] 【class_inputs】: {'required': {'samples': ('LATENT', {'tooltip': 'The latent to be decoded.'}), 'vae': ('VAE', {'tooltip': 'The VAE model used for decoding the latent.'})}}\n[ComfyUI] 【valid_inputs】: {'samples', 'vae'}\n[ComfyUI] 【item】: 3\n[ComfyUI] 【inputs】: {'seed': 3339211691, 'steps': 26, 'cfg': 7, 'sampler_name': 'euler', 'scheduler': 'exponential', 'denoise': 1, 'model': ['40', 0], 'positive': ['6', 0], 'negative': ['7', 0], 'latent_image': ['5', 0]}\n[ComfyUI] 【class_type】: KSampler\n[ComfyUI] 【obj_class】: <class 'nodes.KSampler'>\n[ComfyUI] 【class_inputs】: {'required': {'model': ('MODEL', {'tooltip': 'The model used for denoising the input latent.'}), 'seed': ('INT', {'default': 0, 'min': 0, 'max': 18446744073709551615, 'control_after_generate': True, 'tooltip': 'The random seed used for creating the noise.'}), 'steps': ('INT', {'default': 20, 'min': 1, 'max': 10000, 'tooltip': 'The number of steps used in the denoising process.'}), 'cfg': ('FLOAT', {'default': 8.0, 'min': 0.0, 'max': 100.0, 'step': 0.1, 'round': 0.01, 'tooltip': 'The Classifier-Free Guidance scale balances creativity and adherence to the prompt. 
Higher values result in images more closely matching the prompt however too high values will negatively impact quality.'}), 'sampler_name': (['euler', 'euler_cfg_pp', 'euler_ancestral', 'euler_ancestral_cfg_pp', 'heun', 'heunpp2', 'dpm_2', 'dpm_2_ancestral', 'lms', 'dpm_fast', 'dpm_adaptive', 'dpmpp_2s_ancestral', 'dpmpp_2s_ancestral_cfg_pp', 'dpmpp_sde', 'dpmpp_sde_gpu', 'dpmpp_2m', 'dpmpp_2m_cfg_pp', 'dpmpp_2m_sde', 'dpmpp_2m_sde_gpu', 'dpmpp_3m_sde', 'dpmpp_3m_sde_gpu', 'ddpm', 'lcm', 'ipndm', 'ipndm_v', 'deis', 'res_multistep', 'res_multistep_cfg_pp', 'res_multistep_ancestral', 'res_multistep_ancestral_cfg_pp', 'gradient_estimation', 'er_sde', 'ddim', 'uni_pc', 'uni_pc_bh2'], {'tooltip': 'The algorithm used when sampling, this can affect the quality, speed, and style of the generated output.'}), 'scheduler': (['normal', 'karras', 'exponential', 'sgm_uniform', 'simple', 'ddim_uniform', 'beta', 'linear_quadratic', 'kl_optimal'], {'tooltip': 'The scheduler controls how noise is gradually removed to form the image.'}), 'positive': ('CONDITIONING', {'tooltip': 'The conditioning describing the attributes you want to include in the image.'}), 'negative': ('CONDITIONING', {'tooltip': 'The conditioning describing the attributes you want to exclude from the image.'}), 'latent_image': ('LATENT', {'tooltip': 'The latent image to denoise.'}), 'denoise': ('FLOAT', {'default': 1.0, 'min': 0.0, 'max': 1.0, 'step': 0.01, 'tooltip': 'The amount of denoising applied, lower values will maintain the structure of the initial image allowing for image to image sampling.'})}}\n[ComfyUI] 【valid_inputs】: {'model', 'seed', 'cfg', 'scheduler', 'latent_image', 'steps', 'negative', 'denoise', 'positive', 'sampler_name'}\n[ComfyUI] 【item】: 40\n[ComfyUI] 【inputs】: {'ckpt_name': 'waiNSFWIllustrious_v90.safetensors'}\n[ComfyUI] 【class_type】: CheckpointLoaderSimple\n[ComfyUI] 【obj_class】: <class 'nodes.CheckpointLoaderSimple'>\n[ComfyUI] 【class_inputs】: {'required': {'ckpt_name': 
(['waiNSFWIllustrious_v90.safetensors'], {'tooltip': 'The name of the checkpoint (model) to load.'})}}\n[ComfyUI] 【valid_inputs】: {'ckpt_name'}\n[ComfyUI] 【item】: 5\n[ComfyUI] 【inputs】: {'width': 768, 'height': 1280, 'batch_size': 1}\n[ComfyUI] 【class_type】: EmptyLatentImage\n[ComfyUI] 【obj_class】: <class 'nodes.EmptyLatentImage'>\n[ComfyUI] 【class_inputs】: {'required': {'width': ('INT', {'default': 512, 'min': 16, 'max': 16384, 'step': 8, 'tooltip': 'The width of the latent images in pixels.'}), 'height': ('INT', {'default': 512, 'min': 16, 'max': 16384, 'step': 8, 'tooltip': 'The height of the latent images in pixels.'}), 'batch_size': ('INT', {'default': 1, 'min': 1, 'max': 4096, 'tooltip': 'The number of latent images in the batch.'})}}\n[ComfyUI] 【valid_inputs】: {'height', 'batch_size', 'width'}\n[ComfyUI] 【item】: 7\n[ComfyUI] 【inputs】: {'text': 'bad quality,worst quality,worst detail,sketch,censor,', 'clip': ['40', 1]}\n[ComfyUI] 【class_type】: CLIPTextEncode\n[ComfyUI] 【obj_class】: <class 'nodes.CLIPTextEncode'>\n[ComfyUI] 【class_inputs】: {'required': {'text': (<IO.STRING: 'STRING'>, {'multiline': True, 'dynamicPrompts': True, 'tooltip': 'The text to be encoded.'}), 'clip': (<IO.CLIP: 'CLIP'>, {'tooltip': 'The CLIP model used for encoding the text.'})}}\n[ComfyUI] 【valid_inputs】: {'clip', 'text'}\n[ComfyUI] 【item】: 40\n[ComfyUI] 【item】: 6\n[ComfyUI] 【inputs】: {'text': 'glowing eyes, streaked hair, glowing inner hair, glowing streaks, 1girl, jinhsi \\\\(wuthering_waves\\\\), white hair,blonde_eyes, large_breasts, (upper_body,close-up:1.4),Dynamic pose,串,((aqua and yellow:0.85) but Limited palette:1.2),//,Chinese architecture,Ink,hair flowers,bamboo forest,bamboo,//(long hair),small breasts,smile,((dragon)),Yellow lightning,(makeup,white eyeliner),white eyeshadow,white eyes,(long eyelashes),half-closed eyes,Dragon skirt,blush,holding sword,chinese sword,(dynamic angle:1.2),(hanfu:1.4),chinese clothes,transparent clothes,tassel,chinese knot,bare 
shoulders,kanzashi,draped silk,gold trim,wind,bokeh,scattered leaves,flying splashes,waterfall,splashed water,looking at viewer, \\n,masterpiece,best quality,amazing quality,', 'clip': ['42', 0]}\n[ComfyUI] 【class_type】: CLIPTextEncode\n[ComfyUI] 【obj_class】: <class 'nodes.CLIPTextEncode'>\n[ComfyUI] 【class_inputs】: {'required': {'text': (<IO.STRING: 'STRING'>, {'multiline': True, 'dynamicPrompts': True, 'tooltip': 'The text to be encoded.'}), 'clip': (<IO.CLIP: 'CLIP'>, {'tooltip': 'The CLIP model used for encoding the text.'})}}\n[ComfyUI] 【valid_inputs】: {'clip', 'text'}\n[ComfyUI] 【item】: 42\n[ComfyUI] 【inputs】: {'stop_at_clip_layer': -2, 'clip': ['40', 1]}\n[ComfyUI] 【class_type】: CLIPSetLastLayer\n[ComfyUI] 【obj_class】: <class 'nodes.CLIPSetLastLayer'>\n[ComfyUI] 【class_inputs】: {'required': {'clip': ('CLIP',), 'stop_at_clip_layer': ('INT', {'default': -1, 'min': -24, 'max': -1, 'step': 1})}}\n[ComfyUI] 【valid_inputs】: {'clip', 'stop_at_clip_layer'}\n[ComfyUI] 【item】: 40\n[ComfyUI] 【item】: 41\n[ComfyUI] 【inputs】: {'vae_name': 'sdxl_vae.safetensors'}\n[ComfyUI] 【class_type】: VAELoader\n[ComfyUI] 【obj_class】: <class 'nodes.VAELoader'>\n[ComfyUI] 【class_inputs】: {'required': {'vae_name': (['sdxl_vae.safetensors'],)}}\n[ComfyUI] 【valid_inputs】: {'vae_name'}\n[ComfyUI] 0%| | 0/26 [00:00<?, ?it/s]\n[ComfyUI] 4%|▍ | 1/26 [00:00<00:07, 3.36it/s]\n[ComfyUI] 12%|█▏ | 3/26 [00:00<00:03, 7.28it/s]\n[ComfyUI] 15%|█▌ | 4/26 [00:00<00:02, 7.86it/s]\n[ComfyUI] 19%|█▉ | 5/26 [00:00<00:02, 8.27it/s]\n[ComfyUI] 23%|██▎ | 6/26 [00:00<00:02, 8.56it/s]\n[ComfyUI] 27%|██▋ | 7/26 [00:00<00:02, 8.76it/s]\n[ComfyUI] 31%|███ | 8/26 [00:01<00:02, 8.90it/s]\n[ComfyUI] 35%|███▍ | 9/26 [00:01<00:01, 9.00it/s]\n[ComfyUI] 38%|███▊ | 10/26 [00:01<00:01, 9.06it/s]\n[ComfyUI] 42%|████▏ | 11/26 [00:01<00:01, 9.11it/s]\n[ComfyUI] 46%|████▌ | 12/26 [00:01<00:01, 9.10it/s]\n[ComfyUI] 50%|█████ | 13/26 [00:01<00:01, 9.13it/s]\n[ComfyUI] 54%|█████▍ | 14/26 [00:01<00:01, 9.15it/s]\n[ComfyUI] 
58%|█████▊ | 15/26 [00:01<00:01, 9.15it/s]\n[ComfyUI] 62%|██████▏ | 16/26 [00:01<00:01, 9.15it/s]\n[ComfyUI] 65%|██████▌ | 17/26 [00:01<00:00, 9.18it/s]\n[ComfyUI] 69%|██████▉ | 18/26 [00:02<00:00, 9.17it/s]\n[ComfyUI] 73%|███████▎ | 19/26 [00:02<00:00, 9.18it/s]\n[ComfyUI] 77%|███████▋ | 20/26 [00:02<00:00, 9.19it/s]\n[ComfyUI] 81%|████████ | 21/26 [00:02<00:00, 9.21it/s]\n[ComfyUI] 85%|████████▍ | 22/26 [00:02<00:00, 9.21it/s]\n[ComfyUI] 88%|████████▊ | 23/26 [00:02<00:00, 9.19it/s]\n[ComfyUI] 92%|█████████▏| 24/26 [00:02<00:00, 9.18it/s]\n[ComfyUI] 96%|█████████▌| 25/26 [00:02<00:00, 9.19it/s]\n[ComfyUI] 100%|██████████| 26/26 [00:02<00:00, 9.19it/s]\n[ComfyUI] 100%|██████████| 26/26 [00:02<00:00, 8.78it/s]\n[ComfyUI] Requested to load AutoencoderKL\nExecuting node 8, title: VAE解码, class type: VAEDecode\n[ComfyUI] loaded completely 34094.475997924805 159.55708122253418 True\nExecuting node 9, title: 保存图像, class type: SaveImage\n[ComfyUI] Prompt executed in 5.47 seconds\noutputs: {'9': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}\n====================================\nComfyUI_00001_.png",
"metrics": {
"predict_time": 5.686638894,
"total_time": 109.449953
},
"output": [
"https://replicate.delivery/xezq/0oTKgffeL4NPkoALzpVIj0fQXCMZcU8CQ1lPZWK7cFL9wXTSB/ComfyUI_00001_.png"
],
"started_at": "2025-04-21T02:27:21.799314Z",
"status": "succeeded",
"urls": {
"stream": "https://stream.replicate.com/v1/files/bcwr-xzr2bqlnvum5w6ozfrfjzjgwh3wp2dlwvrnwfvbtp576yr2yvvca",
"get": "https://api.replicate.com/v1/predictions/qcjrqt2ryhrma0cparna6tpetc",
"cancel": "https://api.replicate.com/v1/predictions/qcjrqt2ryhrma0cparna6tpetc/cancel"
},
"version": "820ce2c86370ccfac38e9126bcffc58d23348a0ab06179c4b2f49c444ef2d0a6"
}
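The response above is plain JSON; a small helper for pulling out the fields you usually care about (status, output URLs, timing) from such a prediction dict after json.loads — summarize_prediction is a hypothetical name, not part of any client library:

```python
def summarize_prediction(pred):
    """Reduce a prediction response dict to its commonly used fields."""
    return {
        "status": pred["status"],                                  # e.g. "succeeded"
        "images": pred.get("output") or [],                        # list of output URLs
        "predict_time": pred.get("metrics", {}).get("predict_time"),
        "error": pred.get("error"),                                # None on success
    }
```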
This model costs approximately $0.018 to run on Replicate, or 55 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.
This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 19 seconds. The predict time for this model varies significantly based on the inputs.
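The quoted price works out as follows (a rough sanity check on the numbers above, not a billing formula):

```python
cost_per_run = 0.018            # approximate cost per run quoted above, in USD
runs_per_dollar = 1 / cost_per_run
print(int(runs_per_dollar))     # about 55 runs per $1
```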
'max': 1.0, 'step': 0.01, 'tooltip': 'The amount of denoising applied, lower values will maintain the structure of the initial image allowing for image to image sampling.'})}}
[ComfyUI] 【valid_inputs】: {'model', 'seed', 'cfg', 'scheduler', 'latent_image', 'steps', 'negative', 'denoise', 'positive', 'sampler_name'}
[ComfyUI] 【item】: 40
[ComfyUI] 【inputs】: {'ckpt_name': 'waiNSFWIllustrious_v90.safetensors'}
[ComfyUI] 【class_type】: CheckpointLoaderSimple
[ComfyUI] 【obj_class】: <class 'nodes.CheckpointLoaderSimple'>
[ComfyUI] 【class_inputs】: {'required': {'ckpt_name': (['waiNSFWIllustrious_v90.safetensors'], {'tooltip': 'The name of the checkpoint (model) to load.'})}}
[ComfyUI] 【valid_inputs】: {'ckpt_name'}
[ComfyUI] 【item】: 5
[ComfyUI] 【inputs】: {'width': 768, 'height': 1280, 'batch_size': 1}
[ComfyUI] 【class_type】: EmptyLatentImage
[ComfyUI] 【obj_class】: <class 'nodes.EmptyLatentImage'>
[ComfyUI] 【class_inputs】: {'required': {'width': ('INT', {'default': 512, 'min': 16, 'max': 16384, 'step': 8, 'tooltip': 'The width of the latent images in pixels.'}), 'height': ('INT', {'default': 512, 'min': 16, 'max': 16384, 'step': 8, 'tooltip': 'The height of the latent images in pixels.'}), 'batch_size': ('INT', {'default': 1, 'min': 1, 'max': 4096, 'tooltip': 'The number of latent images in the batch.'})}}
[ComfyUI] 【valid_inputs】: {'height', 'batch_size', 'width'}
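The EmptyLatentImage node above requests a 768x1280 image. As a rough sketch (assumption: standard SDXL behavior, where the VAE downsamples by a factor of 8 and latents have 4 channels), the resulting latent tensor shape can be derived like this; `empty_latent_shape` is a hypothetical helper, not a ComfyUI function:

```python
def empty_latent_shape(width: int, height: int, batch_size: int = 1):
    """Shape of the latent tensor EmptyLatentImage would allocate.

    Assumes an SD/SDXL-style VAE: 4 latent channels, 8x spatial
    downsampling. This is why ComfyUI steps width/height in multiples of 8.
    """
    return (batch_size, 4, height // 8, width // 8)

# The 768x1280 request in the log corresponds to a (1, 4, 160, 96) latent.
print(empty_latent_shape(768, 1280))  # (1, 4, 160, 96)
```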
[ComfyUI] 【item】: 7
[ComfyUI] 【inputs】: {'text': 'bad quality,worst quality,worst detail,sketch,censor,', 'clip': ['40', 1]}
[ComfyUI] 【class_type】: CLIPTextEncode
[ComfyUI] 【obj_class】: <class 'nodes.CLIPTextEncode'>
[ComfyUI] 【class_inputs】: {'required': {'text': (<IO.STRING: 'STRING'>, {'multiline': True, 'dynamicPrompts': True, 'tooltip': 'The text to be encoded.'}), 'clip': (<IO.CLIP: 'CLIP'>, {'tooltip': 'The CLIP model used for encoding the text.'})}}
[ComfyUI] 【valid_inputs】: {'clip', 'text'}
[ComfyUI] 【item】: 40
[ComfyUI] 【item】: 6
[ComfyUI] 【inputs】: {'text': 'glowing eyes, streaked hair, glowing inner hair, glowing streaks, 1girl, jinhsi \\(wuthering_waves\\), white hair,blonde_eyes, large_breasts, (upper_body,close-up:1.4),Dynamic pose,串,((aqua and yellow:0.85) but Limited palette:1.2),//,Chinese architecture,Ink,hair flowers,bamboo forest,bamboo,//(long hair),small breasts,smile,((dragon)),Yellow lightning,(makeup,white eyeliner),white eyeshadow,white eyes,(long eyelashes),half-closed eyes,Dragon skirt,blush,holding sword,chinese sword,(dynamic angle:1.2),(hanfu:1.4),chinese clothes,transparent clothes,tassel,chinese knot,bare shoulders,kanzashi,draped silk,gold trim,wind,bokeh,scattered leaves,flying splashes,waterfall,splashed water,looking at viewer, \n,masterpiece,best quality,amazing quality,', 'clip': ['42', 0]}
[ComfyUI] 【class_type】: CLIPTextEncode
[ComfyUI] 【obj_class】: <class 'nodes.CLIPTextEncode'>
[ComfyUI] 【class_inputs】: {'required': {'text': (<IO.STRING: 'STRING'>, {'multiline': True, 'dynamicPrompts': True, 'tooltip': 'The text to be encoded.'}), 'clip': (<IO.CLIP: 'CLIP'>, {'tooltip': 'The CLIP model used for encoding the text.'})}}
[ComfyUI] 【valid_inputs】: {'clip', 'text'}
[ComfyUI] 【item】: 42
[ComfyUI] 【inputs】: {'stop_at_clip_layer': -2, 'clip': ['40', 1]}
[ComfyUI] 【class_type】: CLIPSetLastLayer
[ComfyUI] 【obj_class】: <class 'nodes.CLIPSetLastLayer'>
[ComfyUI] 【class_inputs】: {'required': {'clip': ('CLIP',), 'stop_at_clip_layer': ('INT', {'default': -1, 'min': -24, 'max': -1, 'step': 1})}}
[ComfyUI] 【valid_inputs】: {'clip', 'stop_at_clip_layer'}
[ComfyUI] 【item】: 40
[ComfyUI] 【item】: 41
[ComfyUI] 【inputs】: {'vae_name': 'sdxl_vae.safetensors'}
[ComfyUI] 【class_type】: VAELoader
[ComfyUI] 【obj_class】: <class 'nodes.VAELoader'>
[ComfyUI] 【class_inputs】: {'required': {'vae_name': (['sdxl_vae.safetensors'],)}}
[ComfyUI] 【valid_inputs】: {'vae_name'}
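The node dumps above correspond to a ComfyUI API-format workflow (the "prompt" JSON): each node id maps to a `class_type` plus `inputs`, where a `['<id>', <slot>]` pair references another node's output. A sketch of the graph reconstructed from the log (the long positive prompt is elided with a placeholder; otherwise all values are taken from the dumps above):

```python
# ComfyUI API-format workflow reconstructed from the log above.
# "<positive prompt>" is a placeholder for the full text of node 6.
workflow = {
    "40": {"class_type": "CheckpointLoaderSimple",
           "inputs": {"ckpt_name": "waiNSFWIllustrious_v90.safetensors"}},
    "41": {"class_type": "VAELoader",
           "inputs": {"vae_name": "sdxl_vae.safetensors"}},
    "42": {"class_type": "CLIPSetLastLayer",  # clip skip: last 2 layers
           "inputs": {"stop_at_clip_layer": -2, "clip": ["40", 1]}},
    "6":  {"class_type": "CLIPTextEncode",    # positive conditioning
           "inputs": {"text": "<positive prompt>", "clip": ["42", 0]}},
    "7":  {"class_type": "CLIPTextEncode",    # negative conditioning
           "inputs": {"text": "bad quality,worst quality,worst detail,sketch,censor,",
                      "clip": ["40", 1]}},
    "5":  {"class_type": "EmptyLatentImage",
           "inputs": {"width": 768, "height": 1280, "batch_size": 1}},
    "3":  {"class_type": "KSampler",
           "inputs": {"seed": 3339211691, "steps": 26, "cfg": 7,
                      "sampler_name": "euler", "scheduler": "exponential",
                      "denoise": 1, "model": ["40", 0],
                      "positive": ["6", 0], "negative": ["7", 0],
                      "latent_image": ["5", 0]}},
    "8":  {"class_type": "VAEDecode",
           "inputs": {"samples": ["3", 0], "vae": ["41", 0]}},
    "9":  {"class_type": "SaveImage",
           "inputs": {"filename_prefix": "ComfyUI", "images": ["8", 0]}},
}
```

Note that the positive prompt is encoded through node 42 (clip skip -2) while the negative prompt uses the checkpoint's CLIP output directly, exactly as the two `clip` references show.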
[ComfyUI] 0%| | 0/26 [00:00<?, ?it/s]
[ComfyUI] 100%|██████████| 26/26 [00:02<00:00, 8.78it/s]
[ComfyUI] Requested to load AutoencoderKL
Executing node 8, title: VAE Decode, class type: VAEDecode
[ComfyUI] loaded completely 34094.475997924805 159.55708122253418 True
Executing node 9, title: Save Image, class type: SaveImage
[ComfyUI] Prompt executed in 5.47 seconds
outputs: {'9': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}
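The final `outputs` mapping is keyed by node id; SaveImage (node 9) reports the saved file. A minimal sketch of pulling the image path out of that mapping, assuming ComfyUI's default `output/` directory:

```python
import os

# The outputs dict exactly as printed in the log above.
outputs = {'9': {'images': [{'filename': 'ComfyUI_00001_.png',
                             'subfolder': '', 'type': 'output'}]}}

image = outputs['9']['images'][0]
# Assumption: files of type 'output' land under ComfyUI's output/ directory.
path = os.path.join('output', image['subfolder'], image['filename'])
print(path)  # output/ComfyUI_00001_.png
```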
====================================
ComfyUI_00001_.png