adriiita / photoshoot
Public · 66 runs · Runs on A100 (80GB)

Prediction
adriiita/photoshoot:91185a74
ID: afhfv0h35nrj60cnqhaty22vnc
Status: Succeeded
Source: API
Hardware: A100 (80GB)

Input
{
  "image": "https://v5.airtableusercontent.com/v3/u/39/39/1742630400000/GCqOxyTzAHFfDZcicCYc5g/rZE-oKl4ieXy4OKTLt63cglb4qldtOXIbfQGY1w95VSN2_cIagzeLECE5rd1VMDe72xVa7XdYiX5r2Nl7a9jhrCA_xZVEe1MKectTU7dHpKOk-aroSqtt3j-SBuW0avcyDjjKuyj4vc25v-dftTNDDsNr3Dhz7JmkMXlB3MwLlM/BrDXZ3qfYLfvyZJ5Iu2JZfQjL37ZBqqajiuAzHKKWig",
  "prompt": "aesthetically structured room, moody vibe, soft natural light",
  "output_format": "webp",
  "output_quality": 95,
  "negative_prompt": ""
}
Install Replicate’s Node.js client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run adriiita/photoshoot using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "adriiita/photoshoot:91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6",
  {
    input: {
      image: "https://v5.airtableusercontent.com/v3/u/39/39/1742630400000/GCqOxyTzAHFfDZcicCYc5g/rZE-oKl4ieXy4OKTLt63cglb4qldtOXIbfQGY1w95VSN2_cIagzeLECE5rd1VMDe72xVa7XdYiX5r2Nl7a9jhrCA_xZVEe1MKectTU7dHpKOk-aroSqtt3j-SBuW0avcyDjjKuyj4vc25v-dftTNDDsNr3Dhz7JmkMXlB3MwLlM/BrDXZ3qfYLfvyZJ5Iu2JZfQjL37ZBqqajiuAzHKKWig",
      prompt: "aesthetically structured room, moody vibe, soft natural light",
      output_format: "webp",
      output_quality: 95,
      negative_prompt: ""
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:

import replicate
Run adriiita/photoshoot using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "adriiita/photoshoot:91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6",
    input={
        "image": "https://v5.airtableusercontent.com/v3/u/39/39/1742630400000/GCqOxyTzAHFfDZcicCYc5g/rZE-oKl4ieXy4OKTLt63cglb4qldtOXIbfQGY1w95VSN2_cIagzeLECE5rd1VMDe72xVa7XdYiX5r2Nl7a9jhrCA_xZVEe1MKectTU7dHpKOk-aroSqtt3j-SBuW0avcyDjjKuyj4vc25v-dftTNDDsNr3Dhz7JmkMXlB3MwLlM/BrDXZ3qfYLfvyZJ5Iu2JZfQjL37ZBqqajiuAzHKKWig",
        "prompt": "aesthetically structured room, moody vibe, soft natural light",
        "output_format": "webp",
        "output_quality": 95,
        "negative_prompt": ""
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
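For this model the output is a list of file URLs. As a minimal sketch of saving the results locally (assuming the output items are plain URL strings; newer client versions may return FileOutput objects that expose a `.url` attribute instead), you could derive filenames from the URLs and fetch each one with the standard library:

```python
import os
import urllib.request
from urllib.parse import urlparse

def local_name(url: str, index: int = 0) -> str:
    """Derive a local filename from a prediction output URL."""
    base = os.path.basename(urlparse(url).path)
    # Fall back to a generated name if the URL path has no filename part.
    return base or f"output_{index}.webp"

def save_outputs(urls: list[str]) -> list[str]:
    """Download each output URL into the working directory; returns the filenames."""
    names = []
    for i, url in enumerate(urls):
        name = local_name(url, i)
        urllib.request.urlretrieve(url, name)  # fetch the file to disk
        names.append(name)
    return names
```

After `output = replicate.run(...)`, calling `save_outputs(output)` would write files such as `ComfyUI_00001_.webp` next to your script.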
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run adriiita/photoshoot using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6",
    "input": {
      "image": "https://v5.airtableusercontent.com/v3/u/39/39/1742630400000/GCqOxyTzAHFfDZcicCYc5g/rZE-oKl4ieXy4OKTLt63cglb4qldtOXIbfQGY1w95VSN2_cIagzeLECE5rd1VMDe72xVa7XdYiX5r2Nl7a9jhrCA_xZVEe1MKectTU7dHpKOk-aroSqtt3j-SBuW0avcyDjjKuyj4vc25v-dftTNDDsNr3Dhz7JmkMXlB3MwLlM/BrDXZ3qfYLfvyZJ5Iu2JZfQjL37ZBqqajiuAzHKKWig",
      "prompt": "aesthetically structured room, moody vibe, soft natural light",
      "output_format": "webp",
      "output_quality": 95,
      "negative_prompt": ""
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
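The `Prefer: wait` header asks the API to hold the connection until the prediction finishes, but long-running predictions can still come back early with a non-terminal status such as `starting` or `processing`. A minimal polling sketch in Python, assuming the response shape shown in the Output section of this page (a `status` field plus a `urls.get` link):

```python
import json
import time
import urllib.request

# A prediction stops changing once it reaches one of these statuses.
TERMINAL = {"succeeded", "failed", "canceled"}

def is_terminal(status: str) -> bool:
    return status in TERMINAL

def poll(prediction: dict, token: str, interval: float = 2.0) -> dict:
    """Re-fetch a prediction from its `urls.get` link until it settles."""
    while not is_terminal(prediction["status"]):
        time.sleep(interval)
        req = urllib.request.Request(
            prediction["urls"]["get"],
            headers={"Authorization": f"Bearer {token}"},
        )
        with urllib.request.urlopen(req) as resp:
            prediction = json.load(resp)
    return prediction
```

Pass the JSON body returned by the initial POST as `prediction`; the function returns as soon as the status is terminal, so an already-finished prediction is returned unchanged.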
You can run this model locally using Cog. First, install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/adriiita/photoshoot@sha256:91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6 \
  -i 'image="https://v5.airtableusercontent.com/v3/u/39/39/1742630400000/GCqOxyTzAHFfDZcicCYc5g/rZE-oKl4ieXy4OKTLt63cglb4qldtOXIbfQGY1w95VSN2_cIagzeLECE5rd1VMDe72xVa7XdYiX5r2Nl7a9jhrCA_xZVEe1MKectTU7dHpKOk-aroSqtt3j-SBuW0avcyDjjKuyj4vc25v-dftTNDDsNr3Dhz7JmkMXlB3MwLlM/BrDXZ3qfYLfvyZJ5Iu2JZfQjL37ZBqqajiuAzHKKWig"' \
  -i 'prompt="aesthetically structured room, moody vibe, soft natural light"' \
  -i 'output_format="webp"' \
  -i 'output_quality=95' \
  -i 'negative_prompt=""'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/adriiita/photoshoot@sha256:91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "image": "https://v5.airtableusercontent.com/v3/u/39/39/1742630400000/GCqOxyTzAHFfDZcicCYc5g/rZE-oKl4ieXy4OKTLt63cglb4qldtOXIbfQGY1w95VSN2_cIagzeLECE5rd1VMDe72xVa7XdYiX5r2Nl7a9jhrCA_xZVEe1MKectTU7dHpKOk-aroSqtt3j-SBuW0avcyDjjKuyj4vc25v-dftTNDDsNr3Dhz7JmkMXlB3MwLlM/BrDXZ3qfYLfvyZJ5Iu2JZfQjL37ZBqqajiuAzHKKWig",
      "prompt": "aesthetically structured room, moody vibe, soft natural light",
      "output_format": "webp",
      "output_quality": 95,
      "negative_prompt": ""
    }
  }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
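The local server replies with a JSON body that mirrors the hosted API's prediction format. A small sketch for pulling the output file reference(s) out of that response; the field names here are assumed from the hosted API's response shown in the Output section, and `output` is normalized to a list since some models return a single string instead:

```python
import json

def extract_outputs(body: str) -> list[str]:
    """Pull the output file references out of a prediction response body."""
    data = json.loads(body)
    # Surface a failed prediction instead of silently returning nothing.
    if data.get("error"):
        raise RuntimeError(data["error"])
    output = data.get("output")
    if output is None:
        return []
    return output if isinstance(output, list) else [output]
```

For example, piping the curl response above into a script that calls `extract_outputs` would yield the list of generated image paths or URLs.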
Output
{ "completed_at": "2025-03-22T05:33:05.770815Z", "created_at": "2025-03-22T05:30:51.053000Z", "data_removed": false, "error": null, "id": "afhfv0h35nrj60cnqhaty22vnc", "input": { "image": "https://v5.airtableusercontent.com/v3/u/39/39/1742630400000/GCqOxyTzAHFfDZcicCYc5g/rZE-oKl4ieXy4OKTLt63cglb4qldtOXIbfQGY1w95VSN2_cIagzeLECE5rd1VMDe72xVa7XdYiX5r2Nl7a9jhrCA_xZVEe1MKectTU7dHpKOk-aroSqtt3j-SBuW0avcyDjjKuyj4vc25v-dftTNDDsNr3Dhz7JmkMXlB3MwLlM/BrDXZ3qfYLfvyZJ5Iu2JZfQjL37ZBqqajiuAzHKKWig", "prompt": "aesthetically structured room, moody vibe, soft natural light", "output_format": "webp", "output_quality": 95, "negative_prompt": "" }, "logs": "Random seed set to: 1166715600\nChecking inputs\n====================================\nChecking weights\n✅ iclight_sd15_fc_unet_ldm.safetensors exists in ComfyUI/models/diffusion_models\n✅ realisticVisionV60B1_v51HyperVAE.safetensors exists in ComfyUI/models/checkpoints\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 78, title: Unknown, class type: LoadImage\nExecuting node 14, title: Unknown, class type: ImageResize+\nExecuting node 12, title: Unknown, class type: easy imageRemBg\nExecuting node 47, title: Unknown, class type: SplitImageWithAlpha\nExecuting node 4, title: Unknown, class type: CheckpointLoaderSimple\n[ComfyUI] model weight dtype torch.float16, manual cast: None\n[ComfyUI] model_type EPS\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16\n[ComfyUI] CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16\nExecuting node 42, title: Unknown, class type: EmptyLatentImage\n[ComfyUI] Requested to load AutoencoderKL\nExecuting node 43, title: Unknown, class type: VAEDecode\n[ComfyUI] loaded completely 75603.489112854 159.55708122253418 True\n[ComfyUI] FETCH ComfyRegistry Data: 5/79\nExecuting node 46, title: Unknown, class 
type: ImageCompositeMasked\nExecuting node 38, title: Unknown, class type: PreviewImage\nExecuting node 41, title: Unknown, class type: ICLightApplyMaskGrey\nExecuting node 24, title: Unknown, class type: VAEEncodeArgMax\nExecuting node 23, title: Unknown, class type: UNETLoader\n[ComfyUI] model weight dtype torch.float16, manual cast: None\n[ComfyUI] model_type EPS\nExecuting node 40, title: Unknown, class type: ICLightAppply\nExecuting node 58, title: Unknown, class type: easy ipadapterApply\n[ComfyUI] \u001b[1m\u001b[36m[EasyUse] easy ipadapterApply:\u001b[0m Using ClipVisonModel open_clip_model.safetensors\n[ComfyUI] \u001b[1m\u001b[36m[EasyUse] easy ipadapterApply:\u001b[0m Using IpAdapterModel ip-adapter-plus_sd15.safetensors\n[ComfyUI] Requested to load CLIPVisionModelProjection\n[ComfyUI] loaded completely 77333.30392341614 1208.09814453125 True\nExecuting node 59, title: Unknown, class type: PreviewImage\nExecuting node 37, title: Unknown, class type: VAEEncodeArgMax\nExecuting node 7, title: Unknown, class type: CLIPTextEncode\n[ComfyUI] Requested to load SD1ClipModel\n[ComfyUI] loaded completely 76005.40450935364 235.84423828125 True\nExecuting node 6, title: Unknown, class type: CLIPTextEncode\nExecuting node 16, title: Unknown, class type: KSampler\n[ComfyUI] Requested to load BaseModel\n[ComfyUI] Pad weight diffusion_model.input_blocks.0.0.weight from torch.Size([320, 4, 3, 3]) to shape: torch.Size([320, 8, 3, 3])\n[ComfyUI] loaded completely 75769.56016044617 1639.406135559082 True\n[ComfyUI]\n[ComfyUI] FETCH ComfyRegistry Data: 10/79\n[ComfyUI] 0%| | 0/25 [00:00<?, ?it/s]\n[ComfyUI] 4%|▍ | 1/25 [00:00<00:04, 5.70it/s]\n[ComfyUI] 8%|▊ | 2/25 [00:00<00:03, 6.57it/s]\n[ComfyUI] 12%|█▏ | 3/25 [00:00<00:03, 6.75it/s]\n[ComfyUI] 16%|█▌ | 4/25 [00:00<00:03, 6.89it/s]\n[ComfyUI] 20%|██ | 5/25 [00:00<00:02, 6.92it/s]\n[ComfyUI] 24%|██▍ | 6/25 [00:00<00:02, 6.99it/s]\n[ComfyUI] 28%|██▊ | 7/25 [00:01<00:02, 6.99it/s]\n[ComfyUI] 32%|███▏ | 8/25 [00:01<00:02, 
6.95it/s]\n[ComfyUI] 36%|███▌ | 9/25 [00:01<00:02, 6.98it/s]\n[ComfyUI] 40%|████ | 10/25 [00:01<00:02, 7.09it/s]\n[ComfyUI] 44%|████▍ | 11/25 [00:01<00:01, 7.11it/s]\n[ComfyUI] 48%|████▊ | 12/25 [00:01<00:01, 7.14it/s]\n[ComfyUI] 52%|█████▏ | 13/25 [00:01<00:01, 7.07it/s]\n[ComfyUI] 56%|█████▌ | 14/25 [00:02<00:01, 7.10it/s]\n[ComfyUI] 60%|██████ | 15/25 [00:02<00:01, 7.20it/s]\n[ComfyUI] 64%|██████▍ | 16/25 [00:02<00:01, 7.33it/s]\n[ComfyUI] 68%|██████▊ | 17/25 [00:02<00:01, 7.36it/s]\n[ComfyUI] 72%|███████▏ | 18/25 [00:02<00:00, 7.37it/s]\n[ComfyUI] 76%|███████▌ | 19/25 [00:02<00:00, 7.38it/s]\n[ComfyUI] 80%|████████ | 20/25 [00:02<00:00, 7.39it/s]\n[ComfyUI] FETCH ComfyRegistry Data: 15/79\n[ComfyUI] 84%|████████▍ | 21/25 [00:02<00:00, 7.38it/s]\n[ComfyUI] 88%|████████▊ | 22/25 [00:03<00:00, 7.32it/s]\n[ComfyUI] 92%|█████████▏| 23/25 [00:03<00:00, 7.32it/s]\n[ComfyUI] 96%|█████████▌| 24/25 [00:03<00:00, 7.59it/s]\n[ComfyUI] 100%|██████████| 25/25 [00:03<00:00, 8.13it/s]\n[ComfyUI] 100%|██████████| 25/25 [00:03<00:00, 7.25it/s]\nExecuting node 17, title: Unknown, class type: VAEDecode\nExecuting node 61, title: Unknown, class type: PreviewImage\nExecuting node 51, title: Unknown, class type: DetailTransfer\nExecuting node 67, title: Unknown, class type: SaveImage\n[ComfyUI] Prompt executed in 8.54 seconds\noutputs: {'67': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}, '59': {'images': [{'filename': 'ComfyUI_temp_xcfgo_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '61': {'images': [{'filename': 'ComfyUI_temp_mpdyp_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '38': {'images': [{'filename': 'ComfyUI_temp_mxsgf_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '12': {'images': [{'filename': 'easyPreview_temp_mamnp_00001_.png', 'subfolder': '', 'type': 'temp'}]}}\n====================================\nComfyUI_00001_.png", "metrics": { "predict_time": 9.482153648, "total_time": 134.717815 }, "output": [ 
"https://replicate.delivery/yhqm/vxUsPyr8NSLxLdl6ysKwQGVqIfxX8kW7gjlLGeKbFCdR2f1oA/ComfyUI_00001_.webp" ], "started_at": "2025-03-22T05:32:56.288661Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/yswh-lg2i3jg4jurdimr7qlvqcj2xa6jgasugxfaiorddr62z3r4yzs5q", "get": "https://api.replicate.com/v1/predictions/afhfv0h35nrj60cnqhaty22vnc", "cancel": "https://api.replicate.com/v1/predictions/afhfv0h35nrj60cnqhaty22vnc/cancel" }, "version": "91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6" }
Prediction
adriiita/photoshoot:91185a74
ID: h8yy661qgxrj00cnqgfsscx460
Status: Succeeded
Source: Web
Hardware: A100 (80GB)

Input
{
  "image": "https://replicate.delivery/pbxt/MhZaVa2AahPV3rjGoUZp7ipHvTm46dCBJp4xpqLULiTXubWS/photo_of_mblaze_%20%282%29.jpeg",
  "prompt": "aesthetically structured room, moody vibe, soft natural light",
  "output_format": "webp",
  "output_quality": 95,
  "negative_prompt": ""
}
Install Replicate’s Node.js client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run adriiita/photoshoot using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "adriiita/photoshoot:91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6",
  {
    input: {
      image: "https://replicate.delivery/pbxt/MhZaVa2AahPV3rjGoUZp7ipHvTm46dCBJp4xpqLULiTXubWS/photo_of_mblaze_%20%282%29.jpeg",
      prompt: "aesthetically structured room, moody vibe, soft natural light",
      output_format: "webp",
      output_quality: 95,
      negative_prompt: ""
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:

import replicate
Run adriiita/photoshoot using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "adriiita/photoshoot:91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6",
    input={
        "image": "https://replicate.delivery/pbxt/MhZaVa2AahPV3rjGoUZp7ipHvTm46dCBJp4xpqLULiTXubWS/photo_of_mblaze_%20%282%29.jpeg",
        "prompt": "aesthetically structured room, moody vibe, soft natural light",
        "output_format": "webp",
        "output_quality": 95,
        "negative_prompt": ""
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run adriiita/photoshoot using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6",
    "input": {
      "image": "https://replicate.delivery/pbxt/MhZaVa2AahPV3rjGoUZp7ipHvTm46dCBJp4xpqLULiTXubWS/photo_of_mblaze_%20%282%29.jpeg",
      "prompt": "aesthetically structured room, moody vibe, soft natural light",
      "output_format": "webp",
      "output_quality": 95,
      "negative_prompt": ""
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
You can run this model locally using Cog. First, install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/adriiita/photoshoot@sha256:91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6 \
  -i 'image="https://replicate.delivery/pbxt/MhZaVa2AahPV3rjGoUZp7ipHvTm46dCBJp4xpqLULiTXubWS/photo_of_mblaze_%20%282%29.jpeg"' \
  -i 'prompt="aesthetically structured room, moody vibe, soft natural light"' \
  -i 'output_format="webp"' \
  -i 'output_quality=95' \
  -i 'negative_prompt=""'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/adriiita/photoshoot@sha256:91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "image": "https://replicate.delivery/pbxt/MhZaVa2AahPV3rjGoUZp7ipHvTm46dCBJp4xpqLULiTXubWS/photo_of_mblaze_%20%282%29.jpeg",
      "prompt": "aesthetically structured room, moody vibe, soft natural light",
      "output_format": "webp",
      "output_quality": 95,
      "negative_prompt": ""
    }
  }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
Output
{ "completed_at": "2025-03-22T04:34:09.428125Z", "created_at": "2025-03-22T04:31:57.319000Z", "data_removed": false, "error": null, "id": "h8yy661qgxrj00cnqgfsscx460", "input": { "image": "https://replicate.delivery/pbxt/MhZaVa2AahPV3rjGoUZp7ipHvTm46dCBJp4xpqLULiTXubWS/photo_of_mblaze_%20%282%29.jpeg", "prompt": "aesthetically structured room, moody vibe, soft natural light", "output_format": "webp", "output_quality": 95, "negative_prompt": "" }, "logs": "Random seed set to: 1260131595\nChecking inputs\n✅ /tmp/inputs/image.jpeg\n====================================\nChecking weights\n✅ realisticVisionV60B1_v51HyperVAE.safetensors exists in ComfyUI/models/checkpoints\n✅ iclight_sd15_fc_unet_ldm.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 4, title: Unknown, class type: CheckpointLoaderSimple\n[ComfyUI] model weight dtype torch.float16, manual cast: None\n[ComfyUI] model_type EPS\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16\n[ComfyUI] CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16\nExecuting node 78, title: Unknown, class type: LoadImage\nExecuting node 14, title: Unknown, class type: ImageResize+\nExecuting node 12, title: Unknown, class type: easy imageRemBg\nExecuting node 47, title: Unknown, class type: SplitImageWithAlpha\nExecuting node 42, title: Unknown, class type: EmptyLatentImage\n[ComfyUI] Requested to load AutoencoderKL\nExecuting node 43, title: Unknown, class type: VAEDecode\n[ComfyUI] loaded completely 75603.489112854 159.55708122253418 True\nExecuting node 46, title: Unknown, class type: ImageCompositeMasked\nExecuting node 38, title: Unknown, class type: PreviewImage\nExecuting node 37, title: Unknown, class type: VAEEncodeArgMax\nExecuting node 7, title: Unknown, class type: 
CLIPTextEncode\n[ComfyUI] Requested to load SD1ClipModel\n[ComfyUI] loaded completely 78972.73203163147 235.84423828125 True\n[ComfyUI] FETCH ComfyRegistry Data: 5/79\nExecuting node 6, title: Unknown, class type: CLIPTextEncode\nExecuting node 41, title: Unknown, class type: ICLightApplyMaskGrey\nExecuting node 24, title: Unknown, class type: VAEEncodeArgMax\nExecuting node 23, title: Unknown, class type: UNETLoader\n[ComfyUI] model weight dtype torch.float16, manual cast: None\n[ComfyUI] model_type EPS\nExecuting node 40, title: Unknown, class type: ICLightAppply\nExecuting node 58, title: Unknown, class type: easy ipadapterApply\n[ComfyUI] \u001b[1m\u001b[36m[EasyUse] easy ipadapterApply:\u001b[0m Using ClipVisonModel open_clip_model.safetensors\n[ComfyUI] \u001b[1m\u001b[36m[EasyUse] easy ipadapterApply:\u001b[0m Using IpAdapterModel ip-adapter-plus_sd15.safetensors\n[ComfyUI] Requested to load CLIPVisionModelProjection\n[ComfyUI] loaded completely 77081.3346813202 1208.09814453125 True\nExecuting node 59, title: Unknown, class type: PreviewImage\nExecuting node 16, title: Unknown, class type: KSampler\n[ComfyUI] Requested to load BaseModel\n[ComfyUI] Pad weight diffusion_model.input_blocks.0.0.weight from torch.Size([320, 4, 3, 3]) to shape: torch.Size([320, 8, 3, 3])\n[ComfyUI] loaded completely 75769.56016044617 1639.406135559082 True\n[ComfyUI]\n[ComfyUI] 0%| | 0/25 [00:00<?, ?it/s]\n[ComfyUI] 4%|▍ | 1/25 [00:00<00:03, 6.07it/s]\n[ComfyUI] 8%|▊ | 2/25 [00:00<00:03, 6.74it/s]\n[ComfyUI] 12%|█▏ | 3/25 [00:00<00:03, 6.88it/s]\n[ComfyUI] 16%|█▌ | 4/25 [00:00<00:03, 6.96it/s]\n[ComfyUI] 20%|██ | 5/25 [00:00<00:02, 6.97it/s]\n[ComfyUI] FETCH ComfyRegistry Data: 10/79\n[ComfyUI] 24%|██▍ | 6/25 [00:00<00:02, 6.98it/s]\n[ComfyUI] 28%|██▊ | 7/25 [00:01<00:02, 7.03it/s]\n[ComfyUI] 32%|███▏ | 8/25 [00:01<00:02, 7.04it/s]\n[ComfyUI] 36%|███▌ | 9/25 [00:01<00:02, 7.04it/s]\n[ComfyUI] 40%|████ | 10/25 [00:01<00:02, 7.12it/s]\n[ComfyUI] 44%|████▍ | 11/25 [00:01<00:01, 
7.11it/s]\n[ComfyUI] 48%|████▊ | 12/25 [00:01<00:01, 7.15it/s]\n[ComfyUI] 52%|█████▏ | 13/25 [00:01<00:01, 7.11it/s]\n[ComfyUI] 56%|█████▌ | 14/25 [00:01<00:01, 7.15it/s]\n[ComfyUI] 60%|██████ | 15/25 [00:02<00:01, 7.21it/s]\n[ComfyUI] 64%|██████▍ | 16/25 [00:02<00:01, 7.34it/s]\n[ComfyUI] 68%|██████▊ | 17/25 [00:02<00:01, 7.38it/s]\n[ComfyUI] 72%|███████▏ | 18/25 [00:02<00:00, 7.39it/s]\n[ComfyUI] 76%|███████▌ | 19/25 [00:02<00:00, 7.36it/s]\n[ComfyUI] 80%|████████ | 20/25 [00:02<00:00, 7.35it/s]\n[ComfyUI] 84%|████████▍ | 21/25 [00:02<00:00, 7.35it/s]\n[ComfyUI] 88%|████████▊ | 22/25 [00:03<00:00, 7.37it/s]\n[ComfyUI] 92%|█████████▏| 23/25 [00:03<00:00, 7.37it/s]\n[ComfyUI] 96%|█████████▌| 24/25 [00:03<00:00, 7.63it/s]\n[ComfyUI] 100%|██████████| 25/25 [00:03<00:00, 8.16it/s]\n[ComfyUI] 100%|██████████| 25/25 [00:03<00:00, 7.29it/s]\nExecuting node 17, title: Unknown, class type: VAEDecode\nExecuting node 61, title: Unknown, class type: PreviewImage\nExecuting node 51, title: Unknown, class type: DetailTransfer\nExecuting node 67, title: Unknown, class type: SaveImage\n[ComfyUI] Prompt executed in 8.30 seconds\noutputs: {'59': {'images': [{'filename': 'ComfyUI_temp_kccco_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '61': {'images': [{'filename': 'ComfyUI_temp_rfyar_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '12': {'images': [{'filename': 'easyPreview_temp_gyqba_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '38': {'images': [{'filename': 'ComfyUI_temp_vozny_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '67': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}\n====================================\nComfyUI_00001_.png\n[ComfyUI] FETCH ComfyRegistry Data: 15/79", "metrics": { "predict_time": 8.965127028, "total_time": 132.109125 }, "output": [ "https://replicate.delivery/yhqm/fPojmOwncD3iVqSLTLlecGcfIpBbFrhp9aY7Jx6lre9F87rRB/ComfyUI_00001_.webp" ], "started_at": "2025-03-22T04:34:00.462998Z", "status": "succeeded", 
"urls": { "stream": "https://stream.replicate.com/v1/files/yswh-mwh7v4lrfcc4zk3lyrapfvm64yd25igmefwau6ex3b7ifzrveiaa", "get": "https://api.replicate.com/v1/predictions/h8yy661qgxrj00cnqgfsscx460", "cancel": "https://api.replicate.com/v1/predictions/h8yy661qgxrj00cnqgfsscx460/cancel" }, "version": "91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6" }
Prediction

adriiita/photoshoot:91185a74
ID: 9k26e8gz71rj20cnqgc9znxwt4
Status: Succeeded
Source: Web
Hardware: A100 (80GB)

Input

- prompt: aesthetically structured room, moody vibe, soft natural light
- output_format: webp
- output_quality: 95
- negative_prompt: (empty)

{
  "image": "https://replicate.delivery/pbxt/MhZTAFwZr706tRaaaMoK2LPmqlcTbRfjrLVYR6EEiplNZM1u/Taurus-C100-Lite-Executive-Office-Chair-Cellbell-1675073705.webp",
  "prompt": "aesthetically structured room, moody vibe, soft natural light",
  "output_format": "webp",
  "output_quality": 95,
  "negative_prompt": ""
}
Install Replicate’s Node.js client library:

npm install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run adriiita/photoshoot using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "adriiita/photoshoot:91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6",
  {
    input: {
      image: "https://replicate.delivery/pbxt/MhZTAFwZr706tRaaaMoK2LPmqlcTbRfjrLVYR6EEiplNZM1u/Taurus-C100-Lite-Executive-Office-Chair-Cellbell-1675073705.webp",
      prompt: "aesthetically structured room, moody vibe, soft natural light",
      output_format: "webp",
      output_quality: 95,
      negative_prompt: ""
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import the client:

import replicate
Run adriiita/photoshoot using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "adriiita/photoshoot:91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6",
    input={
        "image": "https://replicate.delivery/pbxt/MhZTAFwZr706tRaaaMoK2LPmqlcTbRfjrLVYR6EEiplNZM1u/Taurus-C100-Lite-Executive-Office-Chair-Cellbell-1675073705.webp",
        "prompt": "aesthetically structured room, moody vibe, soft natural light",
        "output_format": "webp",
        "output_quality": 95,
        "negative_prompt": ""
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
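For this model, `replicate.run` returns a list of output file URLs (see the sample response in the Output section). The following is a minimal sketch of downloading those files locally; `local_name` and `save_outputs` are illustrative helpers, not part of the `replicate` client:

```python
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

def local_name(url: str, index: int) -> str:
    """Derive a local filename from an output URL, e.g. 'output_0.webp'."""
    ext = os.path.splitext(urlparse(url).path)[1] or ".webp"  # fall back to .webp if the URL has no extension
    return f"output_{index}{ext}"

def save_outputs(output: list) -> list:
    """Download each output URL into the current directory (needs network access)."""
    names = [local_name(url, i) for i, url in enumerate(output)]
    for url, name in zip(output, names):
        urlretrieve(url, name)  # simple stdlib download; swap in requests/httpx if preferred
    return names
```

`save_outputs(output)` after the `replicate.run` call above would write files such as `output_0.webp` next to your script.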
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
Run adriiita/photoshoot using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6",
    "input": {
      "image": "https://replicate.delivery/pbxt/MhZTAFwZr706tRaaaMoK2LPmqlcTbRfjrLVYR6EEiplNZM1u/Taurus-C100-Lite-Executive-Office-Chair-Cellbell-1675073705.webp",
      "prompt": "aesthetically structured room, moody vibe, soft natural light",
      "output_format": "webp",
      "output_quality": 95,
      "negative_prompt": ""
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
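The same request can be assembled with the Python standard library if you prefer not to shell out to curl. `build_request` below is an illustrative helper (not part of any client library) that mirrors the curl payload above; the `Prefer: wait` header asks the API to hold the connection open until the prediction finishes:

```python
import json
import os
import urllib.request

API_URL = "https://api.replicate.com/v1/predictions"
VERSION = "91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6"

def build_request(input_params: dict) -> urllib.request.Request:
    """Build the same POST the curl example sends to the predictions endpoint."""
    body = json.dumps({"version": VERSION, "input": input_params}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('REPLICATE_API_TOKEN', '')}",
            "Content-Type": "application/json",
            "Prefer": "wait",  # block until the prediction completes
        },
        method="POST",
    )
```

Sending it is then `urllib.request.urlopen(build_request({...}))`, which returns the JSON prediction object shown in the Output section.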
You can run this model locally using Cog. First, install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/adriiita/photoshoot@sha256:91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6 \
  -i 'image="https://replicate.delivery/pbxt/MhZTAFwZr706tRaaaMoK2LPmqlcTbRfjrLVYR6EEiplNZM1u/Taurus-C100-Lite-Executive-Office-Chair-Cellbell-1675073705.webp"' \
  -i 'prompt="aesthetically structured room, moody vibe, soft natural light"' \
  -i 'output_format="webp"' \
  -i 'output_quality=95' \
  -i 'negative_prompt=""'
To learn more, take a look at the Cog documentation.
Alternatively, run the model as a local HTTP prediction server and send it requests:
docker run -d -p 5000:5000 --gpus=all r8.im/adriiita/photoshoot@sha256:91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6

curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "image": "https://replicate.delivery/pbxt/MhZTAFwZr706tRaaaMoK2LPmqlcTbRfjrLVYR6EEiplNZM1u/Taurus-C100-Lite-Executive-Office-Chair-Cellbell-1675073705.webp",
      "prompt": "aesthetically structured room, moody vibe, soft natural light",
      "output_format": "webp",
      "output_quality": 95,
      "negative_prompt": ""
    }
  }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
Output
{ "completed_at": "2025-03-22T04:27:23.538286Z", "created_at": "2025-03-22T04:24:12.344000Z", "data_removed": false, "error": null, "id": "9k26e8gz71rj20cnqgc9znxwt4", "input": { "image": "https://replicate.delivery/pbxt/MhZTAFwZr706tRaaaMoK2LPmqlcTbRfjrLVYR6EEiplNZM1u/Taurus-C100-Lite-Executive-Office-Chair-Cellbell-1675073705.webp", "prompt": "aesthetically structured room, moody vibe, soft natural light", "output_format": "webp", "output_quality": 95, "negative_prompt": "" }, "logs": "Random seed set to: 1550447645\nChecking inputs\n✅ /tmp/inputs/image.webp\n====================================\nChecking weights\n✅ realisticVisionV60B1_v51HyperVAE.safetensors exists in ComfyUI/models/checkpoints\n✅ iclight_sd15_fc_unet_ldm.safetensors exists in ComfyUI/models/diffusion_models\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 78, title: Unknown, class type: LoadImage\nExecuting node 14, title: Unknown, class type: ImageResize+\nExecuting node 12, title: Unknown, class type: easy imageRemBg\nExecuting node 47, title: Unknown, class type: SplitImageWithAlpha\nExecuting node 4, title: Unknown, class type: CheckpointLoaderSimple\n[ComfyUI] model weight dtype torch.float16, manual cast: None\n[ComfyUI] model_type EPS\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16\n[ComfyUI] CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16\nExecuting node 42, title: Unknown, class type: EmptyLatentImage\n[ComfyUI] Requested to load AutoencoderKL\nExecuting node 43, title: Unknown, class type: VAEDecode\n[ComfyUI] loaded completely 75603.489112854 159.55708122253418 True\nExecuting node 46, title: Unknown, class type: ImageCompositeMasked\nExecuting node 38, title: Unknown, class type: PreviewImage\nExecuting node 37, title: Unknown, class type: VAEEncodeArgMax\nExecuting node 
7, title: Unknown, class type: CLIPTextEncode\n[ComfyUI] Requested to load SD1ClipModel\n[ComfyUI] loaded completely 78972.73203163147 235.84423828125 True\n[ComfyUI] FETCH ComfyRegistry Data: 5/79\nExecuting node 6, title: Unknown, class type: CLIPTextEncode\nExecuting node 41, title: Unknown, class type: ICLightApplyMaskGrey\nExecuting node 24, title: Unknown, class type: VAEEncodeArgMax\nExecuting node 23, title: Unknown, class type: UNETLoader\n[ComfyUI] model weight dtype torch.float16, manual cast: None\n[ComfyUI] model_type EPS\nExecuting node 40, title: Unknown, class type: ICLightAppply\nExecuting node 58, title: Unknown, class type: easy ipadapterApply\n[ComfyUI] \u001b[1m\u001b[36m[EasyUse] easy ipadapterApply:\u001b[0m Using ClipVisonModel open_clip_model.safetensors\n[ComfyUI] \u001b[1m\u001b[36m[EasyUse] easy ipadapterApply:\u001b[0m Using IpAdapterModel ip-adapter-plus_sd15.safetensors\n[ComfyUI] Requested to load CLIPVisionModelProjection\n[ComfyUI] loaded completely 77081.3346813202 1208.09814453125 True\nExecuting node 59, title: Unknown, class type: PreviewImage\nExecuting node 16, title: Unknown, class type: KSampler\n[ComfyUI] Requested to load BaseModel\n[ComfyUI] Pad weight diffusion_model.input_blocks.0.0.weight from torch.Size([320, 4, 3, 3]) to shape: torch.Size([320, 8, 3, 3])\n[ComfyUI] loaded completely 75769.56016044617 1639.406135559082 True\n[ComfyUI]\n[ComfyUI] 0%| | 0/25 [00:00<?, ?it/s]\n[ComfyUI] 4%|▍ | 1/25 [00:00<00:04, 5.78it/s]\n[ComfyUI] 8%|▊ | 2/25 [00:00<00:03, 6.57it/s]\n[ComfyUI] 12%|█▏ | 3/25 [00:00<00:03, 6.75it/s]\n[ComfyUI] FETCH ComfyRegistry Data: 10/79\n[ComfyUI] 16%|█▌ | 4/25 [00:00<00:03, 6.88it/s]\n[ComfyUI] 20%|██ | 5/25 [00:00<00:02, 6.92it/s]\n[ComfyUI] 24%|██▍ | 6/25 [00:00<00:02, 6.98it/s]\n[ComfyUI] 28%|██▊ | 7/25 [00:01<00:02, 6.95it/s]\n[ComfyUI] 32%|███▏ | 8/25 [00:01<00:02, 6.92it/s]\n[ComfyUI] 36%|███▌ | 9/25 [00:01<00:02, 6.92it/s]\n[ComfyUI] 40%|████ | 10/25 [00:01<00:02, 7.05it/s]\n[ComfyUI] 
44%|████▍ | 11/25 [00:01<00:01, 7.08it/s]\n[ComfyUI] 48%|████▊ | 12/25 [00:01<00:01, 7.12it/s]\n[ComfyUI] 52%|█████▏ | 13/25 [00:01<00:01, 7.06it/s]\n[ComfyUI] 56%|█████▌ | 14/25 [00:02<00:01, 7.08it/s]\n[ComfyUI] 60%|██████ | 15/25 [00:02<00:01, 7.17it/s]\n[ComfyUI] 64%|██████▍ | 16/25 [00:02<00:01, 7.30it/s]\n[ComfyUI] 68%|██████▊ | 17/25 [00:02<00:01, 7.27it/s]\n[ComfyUI] 72%|███████▏ | 18/25 [00:02<00:00, 7.32it/s]\n[ComfyUI] 76%|███████▌ | 19/25 [00:02<00:00, 7.31it/s]\n[ComfyUI] 80%|████████ | 20/25 [00:02<00:00, 7.32it/s]\n[ComfyUI] 84%|████████▍ | 21/25 [00:02<00:00, 7.34it/s]\n[ComfyUI] 88%|████████▊ | 22/25 [00:03<00:00, 7.33it/s]\n[ComfyUI] 92%|█████████▏| 23/25 [00:03<00:00, 7.31it/s]\n[ComfyUI] 96%|█████████▌| 24/25 [00:03<00:00, 6.48it/s]\n[ComfyUI] 100%|██████████| 25/25 [00:03<00:00, 7.21it/s]\n[ComfyUI] 100%|██████████| 25/25 [00:03<00:00, 7.07it/s]\nExecuting node 17, title: Unknown, class type: VAEDecode\nExecuting node 61, title: Unknown, class type: PreviewImage\n[ComfyUI] FETCH ComfyRegistry Data: 15/79\nExecuting node 51, title: Unknown, class type: DetailTransfer\nExecuting node 67, title: Unknown, class type: SaveImage\n[ComfyUI] Prompt executed in 8.52 seconds\noutputs: {'38': {'images': [{'filename': 'ComfyUI_temp_shaxp_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '12': {'images': [{'filename': 'easyPreview_temp_bande_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '59': {'images': [{'filename': 'ComfyUI_temp_pihou_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '61': {'images': [{'filename': 'ComfyUI_temp_vevts_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '67': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}\n====================================\nComfyUI_00001_.png", "metrics": { "predict_time": 9.159542388, "total_time": 191.194286 }, "output": [ "https://replicate.delivery/yhqm/uSJ6CNWJgArXGVwxDDXYXmjYkuT2TYyxgIITyEFXL81KuvGF/ComfyUI_00001_.webp" ], "started_at": 
"2025-03-22T04:27:14.378744Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/yswh-zplpi2yzlktru6xmxxcu5d4vxb6ucypzuaasrpjbtsqp45glxjba", "get": "https://api.replicate.com/v1/predictions/9k26e8gz71rj20cnqgc9znxwt4", "cancel": "https://api.replicate.com/v1/predictions/9k26e8gz71rj20cnqgc9znxwt4/cancel" }, "version": "91185a74aadd618ef2fc49fb7a7fdb8a00c46c2d9855dd97015207db433f9ea6" }
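A prediction response like the one above can be unpacked with a small helper. This is an illustrative sketch, not part of any client library; it assumes the response has been parsed into a dict and that `output` is either a single URL string or a list of URLs, depending on the model's schema:

```python
def extract_outputs(prediction: dict) -> list:
    """Return the output file URLs from a prediction response, or [] if not finished."""
    if prediction.get("status") != "succeeded":
        return []  # still starting/processing, or failed/canceled
    output = prediction.get("output") or []
    # Normalize: some models return one URL, others a list of URLs
    return [output] if isinstance(output, str) else list(output)

# Trimmed-down version of the response shown above
sample = {
    "status": "succeeded",
    "output": ["https://replicate.delivery/yhqm/uSJ6CNWJgArXGVwxDDXYXmjYkuT2TYyxgIITyEFXL81KuvGF/ComfyUI_00001_.webp"],
}
```

With the full response, `extract_outputs(prediction)` yields the single `.webp` URL listed in the `output` field; a non-terminal status yields an empty list, which is a convenient signal to keep polling the `urls.get` endpoint.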