jschoormans/comfyui-interior-remodel
Interior remodelling that keeps windows, ceilings, and doors intact. Uses a depth ControlNet weighted to ignore existing furniture.
Prediction
jschoormans/comfyui-interior-remodel:4c0cf5c14b3a375db28df1c15c75f6f6fcd3597a78f5ec6d89a400568cc17b7a
ID: fyeed0c14srgj0cfqx6s9dshzc
Status: Succeeded
Source: Web
Hardware: A100 (40GB)
Input
- image: https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg
- prompt: photo of a beautiful living room, modern design, modernist, cozy high resolution, highly detailed, 4k
- output_format: webp
- output_quality: 80
- negative_prompt: blurry, illustration, distorted, horror
- randomise_seeds: true
- return_temp_files: false

```json
{
  "image": "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg",
  "prompt": "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k",
  "output_format": "webp",
  "output_quality": 80,
  "negative_prompt": "blurry, illustration, distorted, horror",
  "randomise_seeds": true,
  "return_temp_files": false
}
```
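The input payload above follows a fixed schema. As a minimal sketch, a hypothetical helper (not part of the model or client library; the allowed-format set is an assumption) can assemble and sanity-check such a payload before sending it:

```python
# Hypothetical helper: assemble and sanity-check an input payload for this
# model. Field names mirror the JSON input shown above; the set of allowed
# output formats is an assumption, not taken from the model's schema.
ALLOWED_FORMATS = {"webp", "png", "jpg"}

def build_input(image_url, prompt, *, output_format="webp", output_quality=80,
                negative_prompt="", randomise_seeds=True, return_temp_files=False):
    if output_format not in ALLOWED_FORMATS:
        raise ValueError(f"unsupported output_format: {output_format}")
    if not 0 <= output_quality <= 100:
        raise ValueError("output_quality must be in 0..100")
    return {
        "image": image_url,
        "prompt": prompt,
        "output_format": output_format,
        "output_quality": output_quality,
        "negative_prompt": negative_prompt,
        "randomise_seeds": randomise_seeds,
        "return_temp_files": return_temp_files,
    }
```

The resulting dict can be passed directly as the `input` argument in the client examples below.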
Install Replicate’s Node.js client library:

```shell
npm install replicate
```

Import and set up the client:

```javascript
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
```

Run jschoormans/comfyui-interior-remodel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

```javascript
import fs from "node:fs/promises";

const output = await replicate.run(
  "jschoormans/comfyui-interior-remodel:4c0cf5c14b3a375db28df1c15c75f6f6fcd3597a78f5ec6d89a400568cc17b7a",
  {
    input: {
      image: "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg",
      prompt: "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k",
      output_format: "webp",
      output_quality: 80,
      negative_prompt: "blurry, illustration, distorted, horror",
      randomise_seeds: true,
      return_temp_files: false
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
await fs.writeFile("my-image.webp", output[0]);
```
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

```shell
pip install replicate
```

Import the client:

```python
import replicate
```

Run jschoormans/comfyui-interior-remodel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

```python
output = replicate.run(
    "jschoormans/comfyui-interior-remodel:4c0cf5c14b3a375db28df1c15c75f6f6fcd3597a78f5ec6d89a400568cc17b7a",
    input={
        "image": "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg",
        "prompt": "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k",
        "output_format": "webp",
        "output_quality": 80,
        "negative_prompt": "blurry, illustration, distorted, horror",
        "randomise_seeds": True,
        "return_temp_files": False,
    },
)
print(output)
```
To learn more, take a look at the guide on getting started with Python.
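`replicate.run` blocks until the prediction finishes. For long-running jobs you may prefer to create a prediction and poll its status yourself. The backoff loop below is a generic sketch: the `fetch_status` callable is a stand-in for whatever API call returns the current prediction state, and the interval values are illustrative defaults, not anything prescribed by the client library.

```python
import time

def poll_until_done(fetch_status, interval=1.0, max_interval=8.0, timeout=600.0):
    """Poll fetch_status() until the prediction reaches a terminal state.

    fetch_status must return a dict with a "status" key, as Replicate
    prediction responses do ("starting", "processing", "succeeded",
    "failed", "canceled").
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        prediction = fetch_status()
        if prediction["status"] in ("succeeded", "failed", "canceled"):
            return prediction
        time.sleep(interval)
        # Exponential backoff, capped, to avoid hammering the API.
        interval = min(interval * 2, max_interval)
    raise TimeoutError("prediction did not finish in time")
```

Because the status source is injected, the loop can be exercised with a stub before wiring it to a real API call.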
Run jschoormans/comfyui-interior-remodel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
```shell
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "jschoormans/comfyui-interior-remodel:4c0cf5c14b3a375db28df1c15c75f6f6fcd3597a78f5ec6d89a400568cc17b7a",
    "input": {
      "image": "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg",
      "prompt": "photo of a beautiful living room, modern design, modernist, cozy\\nhigh resolution, highly detailed, 4k",
      "output_format": "webp",
      "output_quality": 80,
      "negative_prompt": "blurry, illustration, distorted, horror",
      "randomise_seeds": true,
      "return_temp_files": false
    }
  }' \
  https://api.replicate.com/v1/predictions
```
To learn more, take a look at Replicate’s HTTP API reference docs.
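The curl example uses `$'…'` quoting so the prompt's embedded newline reaches the API as an escaped `\n` in the JSON body. When building the request body programmatically, a JSON serializer performs that escaping for you; a small sketch (the shortened `input` here carries only the prompt, for illustration):

```python
import json

# The prompt contains a real newline character; json.dumps escapes it to the
# two-character sequence \n in the wire format, which is exactly what the
# $'...' quoting in the curl example reproduces by hand.
body = {
    "version": "jschoormans/comfyui-interior-remodel:4c0cf5c14b3a375db28df1c15c75f6f6fcd3597a78f5ec6d89a400568cc17b7a",
    "input": {
        "prompt": "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k",
    },
}
wire = json.dumps(body)
```

Decoding `wire` with `json.loads` restores the literal newline, so no manual escaping is ever needed on either side.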
Output
{ "completed_at": "2024-05-28T13:07:10.288828Z", "created_at": "2024-05-28T13:04:38.694000Z", "data_removed": false, "error": null, "id": "fyeed0c14srgj0cfqx6s9dshzc", "input": { "image": "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg", "prompt": "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k", "output_format": "webp", "output_quality": 80, "negative_prompt": "blurry, illustration, distorted, horror", "randomise_seeds": true, "return_temp_files": false }, "logs": "Checking inputs\n✅ /tmp/inputs/input.jpg\n====================================\nChecking weights\n✅ realvisxlV40_v30InpaintBakedvae.safetensors\n✅ ZoeD_M12_N.pt\n⏳ Downloading depth-zoe-xl-v1.0-controlnet.safetensors to ComfyUI/models/controlnet\n⌛️ Downloaded depth-zoe-xl-v1.0-controlnet.safetensors in 2.92s, size: 4772.35MB\n✅ depth-zoe-xl-v1.0-controlnet.safetensors\n====================================\nRandomising seed to 585044612\nRunning workflow\ngot prompt\nExecuting node 12, title: Load Image, class type: LoadImage\nExecuting node 32, title: 🔧 Image Resize, class type: ImageResize+\nExecuting node 21, title: Zoe Depth Map, class type: Zoe-DepthMapPreprocessor\nmodel_path is /src/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts/lllyasviel/Annotators/ZoeD_M12_N.pt\nExecuting node 17, title: Preview Image, class type: PreviewImage\nExecuting node 26, title: CLIPSeg, class type: CLIPSeg\npreprocessor_config.json: 0%| | 0.00/380 [00:00<?, ?B/s]\npreprocessor_config.json: 100%|██████████| 380/380 [00:00<00:00, 2.73MB/s]\ntokenizer_config.json: 0%| | 0.00/974 [00:00<?, ?B/s]\ntokenizer_config.json: 100%|██████████| 974/974 [00:00<00:00, 9.82MB/s]\nvocab.json: 0%| | 0.00/1.06M [00:00<?, ?B/s]\nvocab.json: 100%|██████████| 1.06M/1.06M [00:00<00:00, 39.8MB/s]\nmerges.txt: 0%| | 0.00/525k [00:00<?, ?B/s]\nmerges.txt: 100%|██████████| 525k/525k [00:00<00:00, 55.0MB/s]\nspecial_tokens_map.json: 0%| | 
0.00/472 [00:00<?, ?B/s]\nspecial_tokens_map.json: 100%|██████████| 472/472 [00:00<00:00, 4.91MB/s]\nconfig.json: 0%| | 0.00/4.73k [00:00<?, ?B/s]\nconfig.json: 100%|██████████| 4.73k/4.73k [00:00<00:00, 25.9MB/s]\npytorch_model.bin: 0%| | 0.00/603M [00:00<?, ?B/s]\npytorch_model.bin: 5%|▌ | 31.5M/603M [00:00<00:02, 231MB/s]\npytorch_model.bin: 10%|█ | 62.9M/603M [00:00<00:01, 270MB/s]\npytorch_model.bin: 21%|██ | 126M/603M [00:00<00:01, 391MB/s] \npytorch_model.bin: 30%|██▉ | 178M/603M [00:00<00:00, 434MB/s]\npytorch_model.bin: 38%|███▊ | 231M/603M [00:00<00:00, 418MB/s]\npytorch_model.bin: 47%|████▋ | 283M/603M [00:00<00:00, 420MB/s]\npytorch_model.bin: 56%|█████▌ | 336M/603M [00:00<00:00, 422MB/s]\npytorch_model.bin: 64%|██████▍ | 388M/603M [00:00<00:00, 433MB/s]\npytorch_model.bin: 75%|███████▍ | 451M/603M [00:01<00:00, 460MB/s]\npytorch_model.bin: 83%|████████▎ | 503M/603M [00:01<00:00, 449MB/s]\npytorch_model.bin: 92%|█████████▏| 556M/603M [00:01<00:00, 435MB/s]\npytorch_model.bin: 100%|██████████| 603M/603M [00:01<00:00, 428MB/s]\npytorch_model.bin: 100%|██████████| 603M/603M [00:01<00:00, 417MB/s]\nExecuting node 27, title: Preview Image, class type: PreviewImage\nExecuting node 30, title: CLIPSeg, class type: CLIPSeg\nExecuting node 31, title: Preview Image, class type: PreviewImage\nExecuting node 35, title: CLIPSeg, class type: CLIPSeg\nExecuting node 36, title: Preview Image, class type: PreviewImage\nExecuting node 45, title: CombineSegMasks, class type: CombineSegMasks\nExecuting node 44, title: Preview Image, class type: PreviewImage\nExecuting node 56, title: CLIPSeg, class type: CLIPSeg\nExecuting node 64, title: Convert Mask to Image, class type: MaskToImage\nExecuting node 57, title: Preview Image, class type: PreviewImage\nExecuting node 4, title: Load Checkpoint, class type: CheckpointLoaderSimple\nmodel_type EPS\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nloaded straight to GPU\nRequested to load SDXL\nLoading 1 new 
model\nExecuting node 6, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nRequested to load SDXLClipModel\nLoading 1 new model\nExecuting node 7, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nExecuting node 34, title: InvertMask, class type: InvertMask\nExecuting node 33, title: InpaintModelConditioning, class type: InpaintModelConditioning\nRequested to load AutoencoderKL\nLoading 1 new model\nExecuting node 61, title: Load Advanced ControlNet Model 🛂🅐🅒🅝, class type: ControlNetLoaderAdvanced\nExecuting node 62, title: InvertMask, class type: InvertMask\nExecuting node 60, title: Apply Advanced ControlNet 🛂🅐🅒🅝, class type: ACN_AdvancedControlNetApply\nExecuting node 3, title: KSampler, class type: KSampler\nRequested to load ControlNet\nLoading 1 new model\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:11, 1.72it/s]\n 10%|█ | 2/20 [00:00<00:07, 2.44it/s]\n 15%|█▌ | 3/20 [00:01<00:06, 2.82it/s]\n 20%|██ | 4/20 [00:01<00:05, 3.05it/s]\n 25%|██▌ | 5/20 [00:01<00:04, 3.17it/s]\n 30%|███ | 6/20 [00:02<00:04, 3.26it/s]\n 35%|███▌ | 7/20 [00:02<00:03, 3.32it/s]\n 40%|████ | 8/20 [00:02<00:03, 3.37it/s]\n 45%|████▌ | 9/20 [00:02<00:02, 3.76it/s]\n 50%|█████ | 10/20 [00:03<00:02, 4.06it/s]\n 55%|█████▌ | 11/20 [00:03<00:02, 4.32it/s]\n 60%|██████ | 12/20 [00:03<00:01, 4.53it/s]\n 65%|██████▌ | 13/20 [00:03<00:01, 4.69it/s]\n 70%|███████ | 14/20 [00:03<00:01, 4.81it/s]\n 75%|███████▌ | 15/20 [00:03<00:01, 4.87it/s]\n 80%|████████ | 16/20 [00:04<00:00, 4.95it/s]\n 85%|████████▌ | 17/20 [00:04<00:00, 4.98it/s]\n 90%|█████████ | 18/20 [00:04<00:00, 5.02it/s]\n 95%|█████████▌| 19/20 [00:04<00:00, 5.08it/s]\n100%|██████████| 20/20 [00:04<00:00, 5.13it/s]\n100%|██████████| 20/20 [00:04<00:00, 4.03it/s]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 58, title: Save Image, class type: SaveImage\nPrompt executed in 23.41 seconds\noutputs: {'17': {'images': [{'filename': 'ComfyUI_temp_hnpkt_00001_.png', 'subfolder': '', 
'type': 'temp'}]}, '27': {'images': [{'filename': 'ComfyUI_temp_urdyl_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '31': {'images': [{'filename': 'ComfyUI_temp_aybqh_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '36': {'images': [{'filename': 'ComfyUI_temp_rrmos_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '44': {'images': [{'filename': 'ComfyUI_temp_xbtcp_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '57': {'images': [{'filename': 'ComfyUI_temp_pmjvp_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '58': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}\n====================================\nContents of /tmp/outputs:\nComfyUI_00001_.png", "metrics": { "predict_time": 27.851578, "total_time": 151.594828 }, "output": [ "https://replicate.delivery/pbxt/pT8vYB7xSKKAPBvaGfROaCJ9N7OdFPy82Sf7O9MHrAq9j44SA/ComfyUI_00001_.webp" ], "started_at": "2024-05-28T13:06:42.437250Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/fyeed0c14srgj0cfqx6s9dshzc", "cancel": "https://api.replicate.com/v1/predictions/fyeed0c14srgj0cfqx6s9dshzc/cancel" }, "version": "4c0cf5c14b3a375db28df1c15c75f6f6fcd3597a78f5ec6d89a400568cc17b7a" }
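The `metrics.predict_time` field in the response above is simply the gap between `started_at` and `completed_at`. That can be verified directly from the timestamps in the payload:

```python
from datetime import datetime

def seconds_between(start: str, end: str) -> float:
    """Duration in seconds between two ISO-8601 timestamps of the form
    used by the started_at/completed_at fields in a prediction response."""
    # datetime.fromisoformat does not accept a trailing "Z" on older
    # Pythons, so normalise it to an explicit UTC offset first.
    start = start.replace("Z", "+00:00")
    end = end.replace("Z", "+00:00")
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds()

# Timestamps taken from the prediction response above.
elapsed = seconds_between("2024-05-28T13:06:42.437250Z", "2024-05-28T13:07:10.288828Z")
```

`elapsed` matches the reported `predict_time` of 27.851578 seconds.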
Prediction
jschoormans/comfyui-interior-remodel:4c0cf5c14b3a375db28df1c15c75f6f6fcd3597a78f5ec6d89a400568cc17b7a
ID: gtsgf3xm7drgj0cfqx8a6yp658
Status: Succeeded
Source: Web
Hardware: A100 (40GB)
Created by @jschoormans
Input
- image: https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg
- prompt: photo of a beautiful living room, modern design, modernist, cozy high resolution, highly detailed, 4k
- output_format: webp
- output_quality: 80
- negative_prompt: blurry, illustration, distorted, horror
- randomise_seeds: true
- return_temp_files: false

```json
{
  "image": "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg",
  "prompt": "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k",
  "output_format": "webp",
  "output_quality": 80,
  "negative_prompt": "blurry, illustration, distorted, horror",
  "randomise_seeds": true,
  "return_temp_files": false
}
```
Output
{ "completed_at": "2024-05-28T13:08:14.565455Z", "created_at": "2024-05-28T13:08:08.379000Z", "data_removed": false, "error": null, "id": "gtsgf3xm7drgj0cfqx8a6yp658", "input": { "image": "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg", "prompt": "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k", "output_format": "webp", "output_quality": 80, "negative_prompt": "blurry, illustration, distorted, horror", "randomise_seeds": true, "return_temp_files": false }, "logs": "Checking inputs\n✅ /tmp/inputs/input.jpg\n====================================\nChecking weights\n✅ realvisxlV40_v30InpaintBakedvae.safetensors\n✅ ZoeD_M12_N.pt\n✅ depth-zoe-xl-v1.0-controlnet.safetensors\n====================================\nRandomising seed to 3896971935\nRunning workflow\ngot prompt\nExecuting node 3, title: KSampler, class type: KSampler\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:06, 2.95it/s]\n 10%|█ | 2/20 [00:00<00:05, 3.20it/s]\n 15%|█▌ | 3/20 [00:00<00:05, 3.30it/s]\n 20%|██ | 4/20 [00:01<00:04, 3.38it/s]\n 25%|██▌ | 5/20 [00:01<00:04, 3.42it/s]\n 30%|███ | 6/20 [00:01<00:04, 3.45it/s]\n 35%|███▌ | 7/20 [00:02<00:03, 3.47it/s]\n 40%|████ | 8/20 [00:02<00:03, 3.47it/s]\n 45%|████▌ | 9/20 [00:02<00:02, 3.84it/s]\n 50%|█████ | 10/20 [00:02<00:02, 4.13it/s]\n 55%|█████▌ | 11/20 [00:02<00:02, 4.37it/s]\n 60%|██████ | 12/20 [00:03<00:01, 4.57it/s]\n 65%|██████▌ | 13/20 [00:03<00:01, 4.71it/s]\n 70%|███████ | 14/20 [00:03<00:01, 4.83it/s]\n 75%|███████▌ | 15/20 [00:03<00:01, 4.90it/s]\n 80%|████████ | 16/20 [00:03<00:00, 4.97it/s]\n 85%|████████▌ | 17/20 [00:04<00:00, 5.00it/s]\n 90%|█████████ | 18/20 [00:04<00:00, 5.01it/s]\n 95%|█████████▌| 19/20 [00:04<00:00, 5.06it/s]\n100%|██████████| 20/20 [00:04<00:00, 5.11it/s]\n100%|██████████| 20/20 [00:04<00:00, 4.24it/s]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 58, title: Save Image, class 
type: SaveImage\nPrompt executed in 5.26 seconds\noutputs: {'17': {'images': [{'filename': 'ComfyUI_temp_hnpkt_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '27': {'images': [{'filename': 'ComfyUI_temp_urdyl_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '31': {'images': [{'filename': 'ComfyUI_temp_aybqh_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '36': {'images': [{'filename': 'ComfyUI_temp_rrmos_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '44': {'images': [{'filename': 'ComfyUI_temp_xbtcp_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '57': {'images': [{'filename': 'ComfyUI_temp_pmjvp_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '58': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}\n====================================\nContents of /tmp/outputs:\nComfyUI_00001_.png", "metrics": { "predict_time": 6.16576, "total_time": 6.186455 }, "output": [ "https://replicate.delivery/pbxt/xtslttxBIRbtLdlf4yeJsfSjeraLoc3UptfPQGOwr8uynEHXC/ComfyUI_00001_.webp" ], "started_at": "2024-05-28T13:08:08.399695Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/gtsgf3xm7drgj0cfqx8a6yp658", "cancel": "https://api.replicate.com/v1/predictions/gtsgf3xm7drgj0cfqx8a6yp658/cancel" }, "version": "4c0cf5c14b3a375db28df1c15c75f6f6fcd3597a78f5ec6d89a400568cc17b7a" }
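Comparing the two prediction responses shows the cost of a cold start: the first run spent most of its 151.6 s total on setup (including downloading the 4.7 GB depth ControlNet), while this warm run's total of 6.19 s is almost entirely predict time. The overhead can be read straight from the `metrics` object:

```python
def startup_overhead(metrics: dict) -> float:
    """Seconds spent queued/booting rather than predicting, computed from
    the metrics object of a prediction response."""
    return metrics["total_time"] - metrics["predict_time"]

# Metrics taken verbatim from the two prediction responses above.
cold = startup_overhead({"predict_time": 27.851578, "total_time": 151.594828})  # first run
warm = startup_overhead({"predict_time": 6.16576, "total_time": 6.186455})      # this run
```

The cold run carries roughly two minutes of overhead; the warm run carries only a few hundredths of a second.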
Prediction
jschoormans/comfyui-interior-remodel:2e2e6c51a6ff3cbc5006ca55d453b10d84097a02a4f9297b731983d5d1293c21
ID: tag237fdhhrj00chav7rhvyr34
Status: Succeeded
Source: Web
Hardware: A100 (80GB)
Input
- image: https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg
- prompt: photo of a beautiful living room, modern design, modernist, cozy high resolution, highly detailed, 4k
- output_format: webp
- output_quality: 80
- negative_prompt: blurry, illustration, distorted, horror
- randomise_seeds: true
- return_temp_files: false

```json
{
  "image": "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg",
  "prompt": "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k",
  "output_format": "webp",
  "output_quality": 80,
  "negative_prompt": "blurry, illustration, distorted, horror",
  "randomise_seeds": true,
  "return_temp_files": false
}
```
Output
{ "completed_at": "2024-08-15T16:20:27.016383Z", "created_at": "2024-08-15T16:12:33.036000Z", "data_removed": false, "error": null, "id": "tag237fdhhrj00chav7rhvyr34", "input": { "image": "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg", "prompt": "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k", "output_format": "webp", "output_quality": 80, "negative_prompt": "blurry, illustration, distorted, horror", "randomise_seeds": true, "return_temp_files": false }, "logs": "Checking inputs\n✅ /tmp/inputs/input.jpg\n====================================\nChecking weights\n✅ depth-zoe-xl-v1.0-controlnet.safetensors\n✅ realvisxlV40_v30InpaintBakedvae.safetensors\n✅ ZoeD_M12_N.pt\n====================================\nRandomising seed to 1301705733\nRunning workflow\ngot prompt\nExecuting node 4, title: Load Checkpoint, class type: CheckpointLoaderSimple\nmodel_type EPS\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nloaded straight to GPU\nRequested to load SDXL\nLoading 1 new model\nExecuting node 6, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nRequested to load SDXLClipModel\nLoading 1 new model\nExecuting node 7, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nExecuting node 12, title: Load Image, class type: LoadImage\nExecuting node 32, title: 🔧 Image Resize, class type: ImageResize+\nExecuting node 30, title: CLIPSeg, class type: CLIPSeg\npreprocessor_config.json: 0%| | 0.00/380 [00:00<?, ?B/s]\npreprocessor_config.json: 100%|██████████| 380/380 [00:00<00:00, 2.81MB/s]\ntokenizer_config.json: 0%| | 0.00/974 [00:00<?, ?B/s]\ntokenizer_config.json: 100%|██████████| 974/974 [00:00<00:00, 8.60MB/s]\nvocab.json: 0%| | 0.00/1.06M [00:00<?, ?B/s]\nvocab.json: 100%|██████████| 1.06M/1.06M [00:00<00:00, 5.72MB/s]\nvocab.json: 100%|██████████| 1.06M/1.06M [00:00<00:00, 5.70MB/s]\nmerges.txt: 0%| | 0.00/525k [00:00<?, 
?B/s]\nmerges.txt: 100%|██████████| 525k/525k [00:00<00:00, 4.22MB/s]\nmerges.txt: 100%|██████████| 525k/525k [00:00<00:00, 4.20MB/s]\nspecial_tokens_map.json: 0%| | 0.00/472 [00:00<?, ?B/s]\nspecial_tokens_map.json: 100%|██████████| 472/472 [00:00<00:00, 4.00MB/s]\nconfig.json: 0%| | 0.00/4.73k [00:00<?, ?B/s]\nconfig.json: 100%|██████████| 4.73k/4.73k [00:00<00:00, 30.8MB/s]\npytorch_model.bin: 0%| | 0.00/603M [00:00<?, ?B/s]\npytorch_model.bin: 3%|▎ | 21.0M/603M [00:00<00:03, 152MB/s]\npytorch_model.bin: 9%|▊ | 52.4M/603M [00:00<00:02, 224MB/s]\npytorch_model.bin: 17%|█▋ | 105M/603M [00:00<00:01, 324MB/s] \npytorch_model.bin: 28%|██▊ | 168M/603M [00:00<00:01, 416MB/s]\npytorch_model.bin: 38%|███▊ | 231M/603M [00:00<00:00, 463MB/s]\npytorch_model.bin: 47%|████▋ | 283M/603M [00:00<00:00, 475MB/s]\npytorch_model.bin: 56%|█████▌ | 336M/603M [00:00<00:00, 476MB/s]\npytorch_model.bin: 64%|██████▍ | 388M/603M [00:00<00:00, 463MB/s]\npytorch_model.bin: 73%|███████▎ | 440M/603M [00:01<00:00, 458MB/s]\npytorch_model.bin: 82%|████████▏ | 493M/603M [00:01<00:00, 467MB/s]\npytorch_model.bin: 90%|█████████ | 545M/603M [00:01<00:00, 451MB/s]\npytorch_model.bin: 99%|█████████▉| 598M/603M [00:01<00:00, 463MB/s]\npytorch_model.bin: 100%|██████████| 603M/603M [00:01<00:00, 433MB/s]\nExecuting node 35, title: CLIPSeg, class type: CLIPSeg\nExecuting node 26, title: CLIPSeg, class type: CLIPSeg\nExecuting node 45, title: CombineSegMasks, class type: CombineSegMasks\nExecuting node 65, title: ThresholdMask, class type: ThresholdMask\nExecuting node 34, title: InvertMask, class type: InvertMask\nExecuting node 33, title: InpaintModelConditioning, class type: InpaintModelConditioning\nRequested to load AutoencoderKL\nLoading 1 new model\nExecuting node 61, title: Load Advanced ControlNet Model 🛂🅐🅒🅝, class type: ControlNetLoaderAdvanced\nExecuting node 21, title: Zoe Depth Map, class type: Zoe-DepthMapPreprocessor\nmodel_path is 
/src/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts/lllyasviel/Annotators/ZoeD_M12_N.pt\nExecuting node 56, title: CLIPSeg, class type: CLIPSeg\nExecuting node 62, title: InvertMask, class type: InvertMask\nExecuting node 60, title: Apply Advanced ControlNet 🛂🅐🅒🅝, class type: ACN_AdvancedControlNetApply\nExecuting node 3, title: KSampler, class type: KSampler\nRequested to load ControlNet\nLoading 1 new model\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:07, 2.70it/s]\n 10%|█ | 2/20 [00:00<00:05, 3.47it/s]\n 15%|█▌ | 3/20 [00:00<00:04, 3.81it/s]\n 20%|██ | 4/20 [00:01<00:03, 4.00it/s]\n 25%|██▌ | 5/20 [00:01<00:03, 4.11it/s]\n 30%|███ | 6/20 [00:01<00:03, 4.18it/s]\n 35%|███▌ | 7/20 [00:01<00:03, 4.23it/s]\n 40%|████ | 8/20 [00:01<00:02, 4.25it/s]\n 45%|████▌ | 9/20 [00:02<00:02, 4.74it/s]\n 50%|█████ | 10/20 [00:02<00:01, 5.14it/s]\n 55%|█████▌ | 11/20 [00:02<00:01, 5.45it/s]\n 60%|██████ | 12/20 [00:02<00:01, 5.69it/s]\n 65%|██████▌ | 13/20 [00:02<00:01, 5.87it/s]\n 70%|███████ | 14/20 [00:02<00:00, 6.01it/s]\n 75%|███████▌ | 15/20 [00:03<00:00, 6.10it/s]\n 80%|████████ | 16/20 [00:03<00:00, 6.18it/s]\n 85%|████████▌ | 17/20 [00:03<00:00, 6.23it/s]\n 90%|█████████ | 18/20 [00:03<00:00, 6.27it/s]\n 95%|█████████▌| 19/20 [00:03<00:00, 6.33it/s]\n100%|██████████| 20/20 [00:03<00:00, 6.40it/s]\n100%|██████████| 20/20 [00:03<00:00, 5.16it/s]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 58, title: Save Image, class type: SaveImage\nPrompt executed in 22.11 seconds\noutputs: {'58': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}\n====================================\nContents of /tmp/outputs:\nComfyUI_00001_.png", "metrics": { "predict_time": 23.228700658, "total_time": 473.980383 }, "output": [ "https://replicate.delivery/yhqm/V1hH4iadUNYlKtXdGwYt19usNi2RS9MuCfs0TiDrtdIl5eSTA/ComfyUI_00001_.webp" ], "started_at": "2024-08-15T16:20:03.787682Z", "status": "succeeded", "urls": { "get": 
"https://api.replicate.com/v1/predictions/tag237fdhhrj00chav7rhvyr34", "cancel": "https://api.replicate.com/v1/predictions/tag237fdhhrj00chav7rhvyr34/cancel" }, "version": "2e2e6c51a6ff3cbc5006ca55d453b10d84097a02a4f9297b731983d5d1293c21" }
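The prediction record above is plain JSON, so the useful fields are easy to pull out programmatically. A minimal sketch (field names taken from the response shown, trimmed to a few fields) that extracts the output URL and the queue/cold-start overhead implied by the metrics:

```python
import json

# A trimmed copy of the prediction JSON shown above.
response = json.loads("""
{
  "id": "tag237fdhhrj00chav7rhvyr34",
  "status": "succeeded",
  "metrics": {"predict_time": 23.228700658, "total_time": 473.980383},
  "output": ["https://replicate.delivery/yhqm/V1hH4iadUNYlKtXdGwYt19usNi2RS9MuCfs0TiDrtdIl5eSTA/ComfyUI_00001_.webp"]
}
""")

if response["status"] == "succeeded":
    image_url = response["output"][0]  # first (and only) output file
    # total_time includes time spent queued and booting; predict_time is GPU time only.
    queue_wait = response["metrics"]["total_time"] - response["metrics"]["predict_time"]
    print(image_url)
    print(f"queue/boot overhead: {queue_wait:.1f}s")
```

Here the gap between `total_time` and `predict_time` is large because this run included a cold start.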
Prediction
jschoormans/comfyui-interior-remodel:2a360362540e1f6cfe59c9db4aa8aa9059233d40e638aae0cdeb6b41f3d0dcce
ID: v6cxbjy905rj00chz6xsyt0gd8
Status: Succeeded
Source: Web
Hardware: A100 (80GB)
Input
- prompt: photo of a beautiful living room, modern design, modernist, cozy high resolution, highly detailed, 4k
- output_format: webp
- output_quality: 80
- negative_prompt: blurry, illustration, distorted, horror
- randomise_seeds: true
- return_temp_files: false
{
  "image": "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg",
  "prompt": "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k",
  "output_format": "webp",
  "output_quality": 80,
  "negative_prompt": "blurry, illustration, distorted, horror",
  "randomise_seeds": true,
  "return_temp_files": false
}
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run jschoormans/comfyui-interior-remodel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "jschoormans/comfyui-interior-remodel:2a360362540e1f6cfe59c9db4aa8aa9059233d40e638aae0cdeb6b41f3d0dcce",
  {
    input: {
      image: "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg",
      prompt: "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k",
      output_format: "webp",
      output_quality: 80,
      negative_prompt: "blurry, illustration, distorted, horror",
      randomise_seeds: true,
      return_temp_files: false
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk (requires `import fs from "node:fs/promises"` at the top of the file):
await fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run jschoormans/comfyui-interior-remodel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "jschoormans/comfyui-interior-remodel:2a360362540e1f6cfe59c9db4aa8aa9059233d40e638aae0cdeb6b41f3d0dcce",
    input={
        "image": "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg",
        "prompt": "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k",
        "output_format": "webp",
        "output_quality": 80,
        "negative_prompt": "blurry, illustration, distorted, horror",
        "randomise_seeds": True,
        "return_temp_files": False
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
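If you call the model repeatedly, it can help to keep the input payload in one place. A small helper (hypothetical, not part of the Replicate client; defaults copied from the example above) that merges per-call values over those defaults:

```python
# Defaults matching the example prediction on this page.
DEFAULT_INPUT = {
    "output_format": "webp",
    "output_quality": 80,
    "negative_prompt": "blurry, illustration, distorted, horror",
    "randomise_seeds": True,
    "return_temp_files": False,
}

def build_input(image_url: str, prompt: str, **overrides) -> dict:
    """Build an input dict for replicate.run(), overriding defaults as needed."""
    payload = {**DEFAULT_INPUT, "image": image_url, "prompt": prompt}
    payload.update(overrides)
    return payload

inp = build_input(
    "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg",
    "photo of a beautiful living room, modern design, modernist, cozy",
    output_quality=95,  # per-call override
)
```

The resulting dict is passed as `input=inp` to `replicate.run(...)` exactly as in the snippet above.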
Run jschoormans/comfyui-interior-remodel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "jschoormans/comfyui-interior-remodel:2a360362540e1f6cfe59c9db4aa8aa9059233d40e638aae0cdeb6b41f3d0dcce",
    "input": {
      "image": "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg",
      "prompt": "photo of a beautiful living room, modern design, modernist, cozy\\nhigh resolution, highly detailed, 4k",
      "output_format": "webp",
      "output_quality": 80,
      "negative_prompt": "blurry, illustration, distorted, horror",
      "randomise_seeds": true,
      "return_temp_files": false
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
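The same HTTP call can be made from any language. A sketch that builds (but does not send) the equivalent request with Python's standard library, mirroring the headers and body of the curl example; the `Prefer: wait` header asks the API to hold the connection open until the prediction finishes:

```python
import json
import os
import urllib.request

body = json.dumps({
    "version": "jschoormans/comfyui-interior-remodel:2a360362540e1f6cfe59c9db4aa8aa9059233d40e638aae0cdeb6b41f3d0dcce",
    "input": {
        "image": "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg",
        "prompt": "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k",
        "output_format": "webp",
        "output_quality": 80,
        "negative_prompt": "blurry, illustration, distorted, horror",
        "randomise_seeds": True,
        "return_temp_files": False,
    },
}).encode()

req = urllib.request.Request(
    "https://api.replicate.com/v1/predictions",
    data=body,
    method="POST",
    headers={
        "Authorization": f"Bearer {os.environ.get('REPLICATE_API_TOKEN', '')}",
        "Content-Type": "application/json",
        "Prefer": "wait",  # block until the prediction completes
    },
)
# urllib.request.urlopen(req) would actually send it; omitted here.
```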
Output
{ "completed_at": "2024-09-16T07:29:38.394997Z", "created_at": "2024-09-16T07:28:39.169000Z", "data_removed": false, "error": null, "id": "v6cxbjy905rj00chz6xsyt0gd8", "input": { "image": "https://replicate.delivery/pbxt/KzvFakocwJYReDGIchM3ErNLOQK93fuzCTsTehB71ebUuiDP/224%202160.jpg", "prompt": "photo of a beautiful living room, modern design, modernist, cozy\nhigh resolution, highly detailed, 4k", "output_format": "webp", "output_quality": 80, "negative_prompt": "blurry, illustration, distorted, horror", "randomise_seeds": true, "return_temp_files": false }, "logs": "Checking inputs\n✅ /tmp/inputs/input.jpg\n====================================\nChecking weights\n✅ ZoeD_M12_N.pt\n✅ realvisxlV40_v30InpaintBakedvae.safetensors\n✅ depth-zoe-xl-v1.0-controlnet.safetensors\n====================================\nRandomising seed to 297588577\nRunning workflow\ngot prompt\nExecuting node 12, title: Load Image, class type: LoadImage\nExecuting node 107, title: Preview Image, class type: PreviewImage\nExecuting node 32, title: 🔧 Image Resize, class type: ImageResize+\nExecuting node 21, title: Zoe Depth Map, class type: Zoe-DepthMapPreprocessor\nmodel_path is /src/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts/lllyasviel/Annotators/ZoeD_M12_N.pt\nExecuting node 109, title: Preview Image, class type: PreviewImage\nExecuting node 66, title: CLIPSeg, class type: CLIPSeg\npreprocessor_config.json: 0%| | 0.00/380 [00:00<?, ?B/s]\npreprocessor_config.json: 100%|██████████| 380/380 [00:00<00:00, 2.64MB/s]\ntokenizer_config.json: 0%| | 0.00/974 [00:00<?, ?B/s]\ntokenizer_config.json: 100%|██████████| 974/974 [00:00<00:00, 8.04MB/s]\nvocab.json: 0%| | 0.00/1.06M [00:00<?, ?B/s]\nvocab.json: 100%|██████████| 1.06M/1.06M [00:00<00:00, 5.65MB/s]\nvocab.json: 100%|██████████| 1.06M/1.06M [00:00<00:00, 5.63MB/s]\nmerges.txt: 0%| | 0.00/525k [00:00<?, ?B/s]\nmerges.txt: 100%|██████████| 525k/525k [00:00<00:00, 2.84MB/s]\nmerges.txt: 100%|██████████| 525k/525k [00:00<00:00, 
2.83MB/s]\nspecial_tokens_map.json: 0%| | 0.00/472 [00:00<?, ?B/s]\nspecial_tokens_map.json: 100%|██████████| 472/472 [00:00<00:00, 4.11MB/s]\nconfig.json: 0%| | 0.00/4.73k [00:00<?, ?B/s]\nconfig.json: 100%|██████████| 4.73k/4.73k [00:00<00:00, 30.9MB/s]\npytorch_model.bin: 0%| | 0.00/603M [00:00<?, ?B/s]\npytorch_model.bin: 7%|▋ | 41.9M/603M [00:00<00:01, 395MB/s]\npytorch_model.bin: 17%|█▋ | 105M/603M [00:00<00:01, 496MB/s] \npytorch_model.bin: 26%|██▌ | 157M/603M [00:00<00:00, 477MB/s]\npytorch_model.bin: 38%|███▊ | 231M/603M [00:00<00:00, 543MB/s]\npytorch_model.bin: 50%|█████ | 304M/603M [00:00<00:00, 575MB/s]\npytorch_model.bin: 61%|██████ | 367M/603M [00:00<00:00, 579MB/s]\npytorch_model.bin: 71%|███████▏ | 430M/603M [00:00<00:00, 582MB/s]\npytorch_model.bin: 82%|████████▏ | 493M/603M [00:00<00:00, 571MB/s]\npytorch_model.bin: 94%|█████████▍| 566M/603M [00:01<00:00, 591MB/s]\npytorch_model.bin: 100%|██████████| 603M/603M [00:01<00:00, 565MB/s]\nExecuting node 67, title: Preview Image, class type: PreviewImage\nExecuting node 68, title: Preview Image, class type: PreviewImage\nExecuting node 35, title: CLIPSeg, class type: CLIPSeg\nExecuting node 69, title: Preview Image, class type: PreviewImage\nExecuting node 70, title: Preview Image, class type: PreviewImage\nExecuting node 56, title: CLIPSeg, class type: CLIPSeg\nExecuting node 72, title: Preview Image, class type: PreviewImage\nExecuting node 73, title: Preview Image, class type: PreviewImage\nExecuting node 30, title: CLIPSeg, class type: CLIPSeg\nExecuting node 74, title: Preview Image, class type: PreviewImage\nExecuting node 75, title: Preview Image, class type: PreviewImage\nExecuting node 110, title: Bitwise(MASK - MASK), class type: SubtractMask\nExecuting node 26, title: CLIPSeg, class type: CLIPSeg\nExecuting node 45, title: CombineSegMasks, class type: CombineSegMasks\nExecuting node 89, title: Gaussian Blur Mask, class type: ImpactGaussianBlurMask\nExecuting node 65, title: ThresholdMask, 
class type: ThresholdMask\nExecuting node 34, title: InvertMask, class type: InvertMask\nExecuting node 76, title: 🔧 Mask Preview, class type: MaskPreview+\nExecuting node 81, title: Convert Mask to Image, class type: MaskToImage\nExecuting node 79, title: Cut By Mask, class type: Cut By Mask\nExecuting node 80, title: Preview Image, class type: PreviewImage\nExecuting node 87, title: Convert Mask to Image, class type: MaskToImage\nExecuting node 85, title: Cut By Mask, class type: Cut By Mask\nExecuting node 86, title: Preview Image, class type: PreviewImage\nExecuting node 4, title: Load Checkpoint, class type: CheckpointLoaderSimple\nmodel_type EPS\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nloaded straight to GPU\nRequested to load SDXL\nLoading 1 new model\nExecuting node 97, title: VAE Encode (for Inpainting), class type: VAEEncodeForInpaint\nRequested to load AutoencoderKL\nLoading 1 new model\nExecuting node 100, title: VAE Decode, class type: VAEDecode\nExecuting node 108, title: Preview Image, class type: PreviewImage\nExecuting node 6, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nRequested to load SDXLClipModel\nLoading 1 new model\nExecuting node 7, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nExecuting node 98, title: InpaintModelConditioning, class type: InpaintModelConditioning\nExecuting node 61, title: Load Advanced ControlNet Model 🛂🅐🅒🅝, class type: ControlNetLoaderAdvanced\nExecuting node 62, title: InvertMask, class type: InvertMask\nExecuting node 60, title: Apply Advanced ControlNet 🛂🅐🅒🅝, class type: ACN_AdvancedControlNetApply\nExecuting node 3, title: KSampler, class type: KSampler\nRequested to load ControlNet\nLoading 1 new model\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:05, 3.31it/s]\n 10%|█ | 2/20 [00:00<00:04, 3.83it/s]\n 15%|█▌ | 3/20 [00:00<00:04, 4.03it/s]\n 20%|██ | 4/20 [00:00<00:03, 4.14it/s]\n 25%|██▌ | 5/20 [00:01<00:03, 4.20it/s]\n 30%|███ | 6/20 [00:01<00:03, 
4.23it/s]\n 35%|███▌ | 7/20 [00:01<00:03, 4.25it/s]\n 40%|████ | 8/20 [00:01<00:02, 4.26it/s]\n 45%|████▌ | 9/20 [00:02<00:02, 4.27it/s]\n 50%|█████ | 10/20 [00:02<00:02, 4.28it/s]\n 55%|█████▌ | 11/20 [00:02<00:02, 4.28it/s]\n 60%|██████ | 12/20 [00:02<00:01, 4.28it/s]\n 65%|██████▌ | 13/20 [00:03<00:01, 4.28it/s]\n 70%|███████ | 14/20 [00:03<00:01, 4.28it/s]\n 75%|███████▌ | 15/20 [00:03<00:01, 4.29it/s]\n 80%|████████ | 16/20 [00:03<00:00, 4.29it/s]\n 85%|████████▌ | 17/20 [00:03<00:00, 4.75it/s]\n 90%|█████████ | 18/20 [00:04<00:00, 5.13it/s]\n 95%|█████████▌| 19/20 [00:04<00:00, 5.46it/s]\n100%|██████████| 20/20 [00:04<00:00, 5.74it/s]\n100%|██████████| 20/20 [00:04<00:00, 4.52it/s]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 58, title: Save Image, class type: SaveImage\nPrompt executed in 26.41 seconds\noutputs: {'107': {'images': [{'filename': 'ComfyUI_temp_pjtzz_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '109': {'images': [{'filename': 'ComfyUI_temp_kerea_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '67': {'images': [{'filename': 'ComfyUI_temp_bjzgi_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '68': {'images': [{'filename': 'ComfyUI_temp_nqcdm_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '69': {'images': [{'filename': 'ComfyUI_temp_dgejg_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '70': {'images': [{'filename': 'ComfyUI_temp_xhtpe_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '72': {'images': [{'filename': 'ComfyUI_temp_mxddz_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '73': {'images': [{'filename': 'ComfyUI_temp_nxfco_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '74': {'images': [{'filename': 'ComfyUI_temp_gxaqs_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '75': {'images': [{'filename': 'ComfyUI_temp_cndrj_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '76': {'images': [{'filename': 'ComfyUI_temp_pggie_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '80': {'images': [{'filename': 
'ComfyUI_temp_bmppd_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '86': {'images': [{'filename': 'ComfyUI_temp_hkthb_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '108': {'images': [{'filename': 'ComfyUI_temp_pefjv_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '58': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}\n====================================\nContents of /tmp/outputs:\nComfyUI_00001_.png", "metrics": { "predict_time": 27.269858369, "total_time": 59.225997 }, "output": [ "https://replicate.delivery/yhqm/PbSjmiA9xdZ4MxwX5cio0m4ozyCKC9hd0SOKfwB1SABxgsuJA/ComfyUI_00001_.webp" ], "started_at": "2024-09-16T07:29:11.125138Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/v6cxbjy905rj00chz6xsyt0gd8", "cancel": "https://api.replicate.com/v1/predictions/v6cxbjy905rj00chz6xsyt0gd8/cancel" }, "version": "2a360362540e1f6cfe59c9db4aa8aa9059233d40e638aae0cdeb6b41f3d0dcce" }
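Note how the `outputs` dict in the logs above distinguishes `"temp"` images (the many PreviewImage nodes) from the single `"output"` image; with `return_temp_files` set to false, only the latter is returned. A sketch of that filtering logic, using an abridged copy of the logged dict:

```python
# Node-id -> images mapping, abridged from the workflow logs above.
outputs = {
    "107": {"images": [{"filename": "ComfyUI_temp_pjtzz_00001_.png", "subfolder": "", "type": "temp"}]},
    "108": {"images": [{"filename": "ComfyUI_temp_pefjv_00001_.png", "subfolder": "", "type": "temp"}]},
    "58":  {"images": [{"filename": "ComfyUI_00001_.png", "subfolder": "", "type": "output"}]},
}

# Keep only final outputs; temp previews are dropped unless return_temp_files is true.
final_images = [
    img["filename"]
    for node in outputs.values()
    for img in node["images"]
    if img["type"] == "output"
]
print(final_images)
```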