SAM2 Infill Anything
Inpaint anything with automatic mask generation. No manual masking required: SAM2 creates the mask automatically from a text prompt, and the masked object is then inpainted with whatever you describe.
Run this model in Node.js with one line of code:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run aaronhayes/sam2-infill-anything using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
"aaronhayes/sam2-infill-anything:622920c3362cfc010e32ff85f98b2f7c4bc36b498e540386b9a52722c3e32f6d",
{
input: {
cfg: 8,
image: "https://replicate.delivery/pbxt/MSDtQ6SQcoBe7skJ2iINnCISioSfSAoe7OyrVaUzkuI47a5Q/image.png",
steps: 20,
denoise: 0.9,
mask_prompt: "rabbit",
infill_prompt: "A small cute baby grizzly bear",
output_format: "jpg",
mask_threshold: 0.5,
output_quality: 95,
infill_negative_prompt: "deformed, distorted, blurry, bad light, extra buildings, extra structures, buildings, overexposed, oversaturated, fake, animated, cartoon"
}
}
);
// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"
// To write the file to disk (at the top of the file: import { writeFile } from "node:fs/promises"):
await writeFile("my-image.jpg", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run aaronhayes/sam2-infill-anything using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
"aaronhayes/sam2-infill-anything:622920c3362cfc010e32ff85f98b2f7c4bc36b498e540386b9a52722c3e32f6d",
input={
"cfg": 8,
"image": "https://replicate.delivery/pbxt/MSDtQ6SQcoBe7skJ2iINnCISioSfSAoe7OyrVaUzkuI47a5Q/image.png",
"steps": 20,
"denoise": 0.9,
"mask_prompt": "rabbit",
"infill_prompt": "A small cute baby grizzly bear",
"output_format": "jpg",
"mask_threshold": 0.5,
"output_quality": 95,
"infill_negative_prompt": "deformed, distorted, blurry, bad light, extra buildings, extra structures, buildings, overexposed, oversaturated, fake, animated, cartoon"
}
)
print(output)
To learn more, take a look at the guide on getting started with Python.
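The Python client returns a list of file outputs, and `print(output)` above only shows their URLs. A minimal sketch for writing the results to disk, assuming each item exposes its bytes via `.read()` (the helper name and output directory are illustrative, not part of the client API):

```python
from pathlib import Path

def save_outputs(outputs, directory="outputs", ext="jpg"):
    """Write each file-like output (anything with .read()) to disk and return the paths."""
    out_dir = Path(directory)
    out_dir.mkdir(parents=True, exist_ok=True)
    paths = []
    for i, item in enumerate(outputs):
        path = out_dir / f"output_{i:05d}.{ext}"
        # .read() returns the raw bytes of the generated image
        path.write_bytes(item.read())
        paths.append(path)
    return paths
```

For the example above you would call `save_outputs(output)` after `replicate.run(...)` returns.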
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run aaronhayes/sam2-infill-anything using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "aaronhayes/sam2-infill-anything:622920c3362cfc010e32ff85f98b2f7c4bc36b498e540386b9a52722c3e32f6d",
"input": {
"cfg": 8,
"image": "https://replicate.delivery/pbxt/MSDtQ6SQcoBe7skJ2iINnCISioSfSAoe7OyrVaUzkuI47a5Q/image.png",
"steps": 20,
"denoise": 0.9,
"mask_prompt": "rabbit",
"infill_prompt": "A small cute baby grizzly bear",
"output_format": "jpg",
"mask_threshold": 0.5,
"output_quality": 95,
"infill_negative_prompt": "deformed, distorted, blurry, bad light, extra buildings, extra structures, buildings, overexposed, oversaturated, fake, animated, cartoon"
}
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/aaronhayes/sam2-infill-anything@sha256:622920c3362cfc010e32ff85f98b2f7c4bc36b498e540386b9a52722c3e32f6d \
-i 'cfg=8' \
-i 'image="https://replicate.delivery/pbxt/MSDtQ6SQcoBe7skJ2iINnCISioSfSAoe7OyrVaUzkuI47a5Q/image.png"' \
-i 'steps=20' \
-i 'denoise=0.9' \
-i 'mask_prompt="rabbit"' \
-i 'infill_prompt="A small cute baby grizzly bear"' \
-i 'output_format="jpg"' \
-i 'mask_threshold=0.5' \
-i 'output_quality=95' \
-i 'infill_negative_prompt="deformed, distorted, blurry, bad light, extra buildings, extra structures, buildings, overexposed, oversaturated, fake, animated, cartoon"'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/aaronhayes/sam2-infill-anything@sha256:622920c3362cfc010e32ff85f98b2f7c4bc36b498e540386b9a52722c3e32f6d
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "cfg": 8,
      "image": "https://replicate.delivery/pbxt/MSDtQ6SQcoBe7skJ2iINnCISioSfSAoe7OyrVaUzkuI47a5Q/image.png",
      "steps": 20,
      "denoise": 0.9,
      "mask_prompt": "rabbit",
      "infill_prompt": "A small cute baby grizzly bear",
      "output_format": "jpg",
      "mask_threshold": 0.5,
      "output_quality": 95,
      "infill_negative_prompt": "deformed, distorted, blurry, bad light, extra buildings, extra structures, buildings, overexposed, oversaturated, fake, animated, cartoon"
    }
  }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
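The same request the curl command above makes can also be issued from Python's standard library. This is a sketch assuming the container is listening on localhost:5000 as started above; the helper names are illustrative:

```python
import json
from urllib import request

def build_prediction_body(input_payload: dict) -> bytes:
    """Encode the {"input": {...}} JSON body the local Cog HTTP endpoint expects."""
    return json.dumps({"input": input_payload}).encode()

def local_predict(input_payload: dict, host: str = "http://localhost:5000") -> dict:
    """POST a prediction to a locally running Cog container and return the parsed response."""
    req = request.Request(
        f"{host}/predictions",
        data=build_prediction_body(input_payload),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Calling `local_predict({"cfg": 8, "mask_prompt": "rabbit", ...})` mirrors the curl invocation, with no API token needed since the model runs locally.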
{
"completed_at": "2025-02-06T22:40:32.422521Z",
"created_at": "2025-02-06T22:39:50.329000Z",
"data_removed": false,
"error": null,
"id": "r960svk475rj20cmvnntesta3r",
"input": {
"cfg": 8,
"image": "https://replicate.delivery/pbxt/MSDtQ6SQcoBe7skJ2iINnCISioSfSAoe7OyrVaUzkuI47a5Q/image.png",
"steps": 20,
"denoise": 0.9,
"mask_prompt": "rabbit",
"infill_prompt": "A small cute baby grizzly bear",
"output_format": "jpg",
"mask_threshold": 0.5,
"output_quality": 95,
"infill_negative_prompt": "deformed, distorted, blurry, bad light, extra buildings, extra structures, buildings, overexposed, oversaturated, fake, animated, cartoon"
},
"logs": "Random seed set to: 849027535\nChecking inputs\n✅ /tmp/inputs/image.png\n====================================\nChecking weights\nChecking if juggernautXLInpainting_xiInpainting.safetensors exists in ComfyUI/models/checkpoints\n✅ juggernautXLInpainting_xiInpainting.safetensors exists in ComfyUI/models/checkpoints\nSkipping sam2_1_hiera_base_plus.pt as weights are bundled in cog\nChecking if 4x-UltraSharp.pth exists in ComfyUI/models/upscale_models\n✅ 4x-UltraSharp.pth exists in ComfyUI/models/upscale_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 21, title: Load Upscale Model, class type: UpscaleModelLoader\nExecuting node 13, title: Load Checkpoint, class type: CheckpointLoaderSimple\n[ComfyUI] model weight dtype torch.float16, manual cast: None\n[ComfyUI] model_type EPS\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16\n[ComfyUI] CLIP model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16\nExecuting node 1, title: Load Image, class type: LoadImage\nExecuting node 2, title: Image Scale Down To Size, class type: easy imageScaleDownToSize\nExecuting node 3, title: 🔧 Get Image Size, class type: GetImageSize+\nExecuting node 4, title: Resize Image, class type: ImageResizeKJ\nExecuting node 10, title: GroundingDinoModelLoader (segment anything2), class type: GroundingDinoModelLoader (segment anything2)\nExecuting node 9, title: SAM2ModelLoader (segment anything2), class type: SAM2ModelLoader (segment anything2)\n[ComfyUI] /root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/functional.py:534: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. 
(Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3595.)\n[ComfyUI] return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]\n[ComfyUI] Loaded checkpoint sucessfully\nExecuting node 11, title: GroundingDinoSAM2Segment (segment anything2), class type: GroundingDinoSAM2Segment (segment anything2)\n[ComfyUI] /root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:632: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.5 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.\n[ComfyUI] return fn(*args, **kwargs)\n[ComfyUI] /root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/utils/checkpoint.py:87: UserWarning: None of the inputs have requires_grad=True. Gradients will be None\n[ComfyUI] warnings.warn(\n[ComfyUI] For numpy array image, we assume (HxWxC) format\n[ComfyUI] Computing image embeddings for the provided image...\n[ComfyUI] Image embeddings computed.\nExecuting node 12, title: GrowMask, class type: GrowMask\nExecuting node 16, title: Negative Prompt, class type: CLIPTextEncode\n[ComfyUI] Requested to load SDXLClipModel\n[ComfyUI] loaded completely 9.5367431640625e+25 1560.802734375 True\nExecuting node 15, title: Prompt, class type: CLIPTextEncode\nExecuting node 17, title: InpaintModelConditioning, class type: InpaintModelConditioning\n[ComfyUI] Requested to load AutoencoderKL\n[ComfyUI] loaded completely 9.5367431640625e+25 159.55708122253418 True\nExecuting node 14, title: Differential Diffusion, class type: DifferentialDiffusion\nExecuting node 18, title: KSampler, class type: KSampler\n[ComfyUI] Requested to load SDXL\n[ComfyUI] loaded completely 9.5367431640625e+25 4897.075813293457 True\n[ComfyUI]\n[ComfyUI] 
\u001b[34m[ComfyUI-Easy-Use] server: \u001b[0mv1.2.7 \u001b[92mLoaded\u001b[0m\n[ComfyUI] \u001b[34m[ComfyUI-Easy-Use] web root: \u001b[0m/src/ComfyUI/custom_nodes/ComfyUI-Easy-Use/web_version/v2 \u001b[92mLoaded\u001b[0m\n[ComfyUI] grounding-dino is using models/bert-base-uncased\n[ComfyUI] final text_encoder_type: /src/ComfyUI/models/bert-base-uncased\n[ComfyUI] scores: [[0.9872246]]\n[ComfyUI] 0%| | 0/20 [00:00<?, ?it/s]\n[ComfyUI] 5%|▌ | 1/20 [00:00<00:02, 6.37it/s]\n[ComfyUI] 15%|█▌ | 3/20 [00:00<00:01, 12.30it/s]\n[ComfyUI] 25%|██▌ | 5/20 [00:00<00:01, 14.28it/s]\n[ComfyUI] 35%|███▌ | 7/20 [00:00<00:00, 15.32it/s]\n[ComfyUI] 45%|████▌ | 9/20 [00:00<00:00, 15.90it/s]\n[ComfyUI] 55%|█████▌ | 11/20 [00:00<00:00, 16.21it/s]\n[ComfyUI] 65%|██████▌ | 13/20 [00:00<00:00, 16.44it/s]\n[ComfyUI] 75%|███████▌ | 15/20 [00:00<00:00, 16.64it/s]\n[ComfyUI] 85%|████████▌ | 17/20 [00:01<00:00, 16.73it/s]\n[ComfyUI] 95%|█████████▌| 19/20 [00:01<00:00, 16.69it/s]\nExecuting node 19, title: VAE Decode, class type: VAEDecode\nExecuting node 20, title: Upscale Image (using Model), class type: ImageUpscaleWithModel\nExecuting node 23, title: Save Image, class type: SaveImage\n[ComfyUI] 100%|██████████| 20/20 [00:01<00:00, 15.72it/s]\n[ComfyUI] Prompt executed in 9.83 seconds\noutputs: {'23': {'images': [{'filename': 'output_00001_.png', 'subfolder': '', 'type': 'output'}]}}\n====================================\noutput_00001_.png",
"metrics": {
"predict_time": 10.608324635,
"total_time": 42.093521
},
"output": [
"https://replicate.delivery/yhqm/AukIczZSuf26S61mq9DseDjhQAiq3ns6bwawaJ8q22UgxuMUA/output_00001_.jpg"
],
"started_at": "2025-02-06T22:40:21.814196Z",
"status": "succeeded",
"urls": {
"stream": "https://stream.replicate.com/v1/files/yswh-fw5fdhqjrpou4ybcvggnv4zhsw6tdedcbt3ixeu76r3rxtzt7nva",
"get": "https://api.replicate.com/v1/predictions/r960svk475rj20cmvnntesta3r",
"cancel": "https://api.replicate.com/v1/predictions/r960svk475rj20cmvnntesta3r/cancel"
},
"version": "622920c3362cfc010e32ff85f98b2f7c4bc36b498e540386b9a52722c3e32f6d"
}
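The timestamps in the prediction object are consistent with its reported metrics. A quick standard-library check, with the timestamps copied from the JSON above:

```python
from datetime import datetime

def seconds_between(start: str, end: str) -> float:
    """Elapsed seconds between two ISO-8601 timestamps with a trailing 'Z'."""
    # fromisoformat() in Python < 3.11 rejects a trailing "Z", so normalize it first
    to_dt = lambda s: datetime.fromisoformat(s.replace("Z", "+00:00"))
    return (to_dt(end) - to_dt(start)).total_seconds()

# started_at -> completed_at matches metrics.predict_time (10.608...)
predict = seconds_between("2025-02-06T22:40:21.814196Z", "2025-02-06T22:40:32.422521Z")
# created_at -> completed_at matches metrics.total_time (42.093521)
total = seconds_between("2025-02-06T22:39:50.329000Z", "2025-02-06T22:40:32.422521Z")
```

The difference between `total` and `predict` is queueing plus cold-start time before the prediction began.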
Random seed set to: 849027535
Checking inputs
✅ /tmp/inputs/image.png
====================================
Checking weights
Checking if juggernautXLInpainting_xiInpainting.safetensors exists in ComfyUI/models/checkpoints
✅ juggernautXLInpainting_xiInpainting.safetensors exists in ComfyUI/models/checkpoints
Skipping sam2_1_hiera_base_plus.pt as weights are bundled in cog
Checking if 4x-UltraSharp.pth exists in ComfyUI/models/upscale_models
✅ 4x-UltraSharp.pth exists in ComfyUI/models/upscale_models
====================================
Running workflow
[ComfyUI] got prompt
Executing node 21, title: Load Upscale Model, class type: UpscaleModelLoader
Executing node 13, title: Load Checkpoint, class type: CheckpointLoaderSimple
[ComfyUI] model weight dtype torch.float16, manual cast: None
[ComfyUI] model_type EPS
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
[ComfyUI] CLIP model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Executing node 1, title: Load Image, class type: LoadImage
Executing node 2, title: Image Scale Down To Size, class type: easy imageScaleDownToSize
Executing node 3, title: 🔧 Get Image Size, class type: GetImageSize+
Executing node 4, title: Resize Image, class type: ImageResizeKJ
Executing node 10, title: GroundingDinoModelLoader (segment anything2), class type: GroundingDinoModelLoader (segment anything2)
Executing node 9, title: SAM2ModelLoader (segment anything2), class type: SAM2ModelLoader (segment anything2)
[ComfyUI] /root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/functional.py:534: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3595.)
[ComfyUI] return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
[ComfyUI] Loaded checkpoint sucessfully
Executing node 11, title: GroundingDinoSAM2Segment (segment anything2), class type: GroundingDinoSAM2Segment (segment anything2)
[ComfyUI] /root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:632: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.5 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
[ComfyUI] return fn(*args, **kwargs)
[ComfyUI] /root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/utils/checkpoint.py:87: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
[ComfyUI] warnings.warn(
[ComfyUI] For numpy array image, we assume (HxWxC) format
[ComfyUI] Computing image embeddings for the provided image...
[ComfyUI] Image embeddings computed.
Executing node 12, title: GrowMask, class type: GrowMask
Executing node 16, title: Negative Prompt, class type: CLIPTextEncode
[ComfyUI] Requested to load SDXLClipModel
[ComfyUI] loaded completely 9.5367431640625e+25 1560.802734375 True
Executing node 15, title: Prompt, class type: CLIPTextEncode
Executing node 17, title: InpaintModelConditioning, class type: InpaintModelConditioning
[ComfyUI] Requested to load AutoencoderKL
[ComfyUI] loaded completely 9.5367431640625e+25 159.55708122253418 True
Executing node 14, title: Differential Diffusion, class type: DifferentialDiffusion
Executing node 18, title: KSampler, class type: KSampler
[ComfyUI] Requested to load SDXL
[ComfyUI] loaded completely 9.5367431640625e+25 4897.075813293457 True
[ComfyUI]
[ComfyUI] [ComfyUI-Easy-Use] server: v1.2.7 Loaded
[ComfyUI] [ComfyUI-Easy-Use] web root: /src/ComfyUI/custom_nodes/ComfyUI-Easy-Use/web_version/v2 Loaded
[ComfyUI] grounding-dino is using models/bert-base-uncased
[ComfyUI] final text_encoder_type: /src/ComfyUI/models/bert-base-uncased
[ComfyUI] scores: [[0.9872246]]
[ComfyUI] 0%| | 0/20 [00:00<?, ?it/s]
[ComfyUI] 5%|▌ | 1/20 [00:00<00:02, 6.37it/s]
[ComfyUI] 15%|█▌ | 3/20 [00:00<00:01, 12.30it/s]
[ComfyUI] 25%|██▌ | 5/20 [00:00<00:01, 14.28it/s]
[ComfyUI] 35%|███▌ | 7/20 [00:00<00:00, 15.32it/s]
[ComfyUI] 45%|████▌ | 9/20 [00:00<00:00, 15.90it/s]
[ComfyUI] 55%|█████▌ | 11/20 [00:00<00:00, 16.21it/s]
[ComfyUI] 65%|██████▌ | 13/20 [00:00<00:00, 16.44it/s]
[ComfyUI] 75%|███████▌ | 15/20 [00:00<00:00, 16.64it/s]
[ComfyUI] 85%|████████▌ | 17/20 [00:01<00:00, 16.73it/s]
[ComfyUI] 95%|█████████▌| 19/20 [00:01<00:00, 16.69it/s]
Executing node 19, title: VAE Decode, class type: VAEDecode
Executing node 20, title: Upscale Image (using Model), class type: ImageUpscaleWithModel
Executing node 23, title: Save Image, class type: SaveImage
[ComfyUI] 100%|██████████| 20/20 [00:01<00:00, 15.72it/s]
[ComfyUI] Prompt executed in 9.83 seconds
outputs: {'23': {'images': [{'filename': 'output_00001_.png', 'subfolder': '', 'type': 'output'}]}}
====================================
output_00001_.png
This model costs approximately $0.13 to run on Replicate, or 7 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.
This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 93 seconds. The predict time for this model varies significantly based on the inputs.
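The "7 runs per $1" figure follows directly from the approximate per-run price:

```python
cost_per_run = 0.13                      # approximate price per run, in USD
runs_per_dollar = int(1 / cost_per_run)  # floor of ~7.69 whole runs per dollar
```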