bryantanjw / entropy-lol
LoRA + Iterative 4x Upscale ComfyUI Workflow
- Public
- 3.1K runs
- A100 (80GB)
- GitHub
Prediction
bryantanjw/entropy-lol:8b6facbe498adbdb66ec6b13740b6a2b56d57a01ddaadd26b63f85de03e98b80
- ID: f7y56ddbjdcffw2kgj4qajzibq
- Status: Succeeded
- Source: Web
- Hardware: A40

Input
- cfg: 3
- lora: anime/2B.safetensors
- steps: 30
- width: 720
- height: 1080
- batch_size: 4
- input_prompt: masterpiece, best quality, highres, hm2b, black blindfold, covered eyes, mole under mouth, clothing cutout, long sleeves, puffy sleeves, juliet sleeves, feather trim, black thighhighs, black gloves, black dress, black skirt, outdoor, grass, building, ruins, field, standing, cowboy shot,
- sampler_name: dpmpp_2m
- lora_strength: 1
- negative_prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, (worst quality, low quality:1.4), monochrome, zombie, (interlocked fingers:1.2)
- checkpoint_model: Era.safetensors
{ "cfg": 3, "lora": "anime/2B.safetensors", "steps": 30, "width": 720, "height": 1080, "batch_size": 4, "input_prompt": "masterpiece, best quality, highres, hm2b, black blindfold, covered eyes, mole under mouth, clothing cutout, long sleeves, puffy sleeves, juliet sleeves, feather trim, black thighhighs, black gloves, black dress, black skirt, outdoor, grass, building, ruins, field, standing, cowboy shot,", "sampler_name": "dpmpp_2m", "lora_strength": 1, "negative_prompt": "lowres, bad anatomy, bad hands, text, error, missing fingers, (worst quality, low quality:1.4), monochrome, zombie, (interlocked fingers:1.2)", "checkpoint_model": "Era.safetensors" }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client (note: the page's original snippet imported node:fs and called fs.writeFile without a callback, which throws; node:fs/promises is used here instead):

import Replicate from "replicate";
import fs from "node:fs/promises";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:8b6facbe498adbdb66ec6b13740b6a2b56d57a01ddaadd26b63f85de03e98b80",
  {
    input: {
      cfg: 3,
      lora: "anime/2B.safetensors",
      steps: 30,
      width: 720,
      height: 1080,
      batch_size: 4,
      input_prompt: "masterpiece, best quality, highres, hm2b, black blindfold, covered eyes, mole under mouth, clothing cutout, long sleeves, puffy sleeves, juliet sleeves, feather trim, black thighhighs, black gloves, black dress, black skirt, outdoor, grass, building, ruins, field, standing, cowboy shot,",
      sampler_name: "dpmpp_2m",
      lora_strength: 1,
      negative_prompt: "lowres, bad anatomy, bad hands, text, error, missing fingers, (worst quality, low quality:1.4), monochrome, zombie, (interlocked fingers:1.2)",
      checkpoint_model: "Era.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk (fs here is node:fs/promises, so await the write):
await fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:8b6facbe498adbdb66ec6b13740b6a2b56d57a01ddaadd26b63f85de03e98b80",
    input={
        "cfg": 3,
        "lora": "anime/2B.safetensors",
        "steps": 30,
        "width": 720,
        "height": 1080,
        "batch_size": 4,
        "input_prompt": "masterpiece, best quality, highres, hm2b, black blindfold, covered eyes, mole under mouth, clothing cutout, long sleeves, puffy sleeves, juliet sleeves, feather trim, black thighhighs, black gloves, black dress, black skirt, outdoor, grass, building, ruins, field, standing, cowboy shot,",
        "sampler_name": "dpmpp_2m",
        "lora_strength": 1,
        "negative_prompt": "lowres, bad anatomy, bad hands, text, error, missing fingers, (worst quality, low quality:1.4), monochrome, zombie, (interlocked fingers:1.2)",
        "checkpoint_model": "Era.safetensors",
    },
)
print(output)
To learn more, take a look at the guide on getting started with Python.
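Since batch_size is 4, output is a list of four images. As a minimal sketch (not part of Replicate’s docs), you can save the whole batch locally; this assumes each item in output stringifies to a downloadable URL, as in the example output below (newer client versions return FileOutput objects, hence the str() call):

import urllib.request

# Hedged sketch: persist every image in the batch returned by replicate.run above.
for i, item in enumerate(output):
    urllib.request.urlretrieve(str(item), f"out-{i}.png")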
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:8b6facbe498adbdb66ec6b13740b6a2b56d57a01ddaadd26b63f85de03e98b80",
    "input": {
      "cfg": 3,
      "lora": "anime/2B.safetensors",
      "steps": 30,
      "width": 720,
      "height": 1080,
      "batch_size": 4,
      "input_prompt": "masterpiece, best quality, highres, hm2b, black blindfold, covered eyes, mole under mouth, clothing cutout, long sleeves, puffy sleeves, juliet sleeves, feather trim, black thighhighs, black gloves, black dress, black skirt, outdoor, grass, building, ruins, field, standing, cowboy shot,",
      "sampler_name": "dpmpp_2m",
      "lora_strength": 1,
      "negative_prompt": "lowres, bad anatomy, bad hands, text, error, missing fingers, (worst quality, low quality:1.4), monochrome, zombie, (interlocked fingers:1.2)",
      "checkpoint_model": "Era.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
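If you omit the Prefer: wait header, the API returns immediately and you poll the prediction’s urls.get endpoint (visible in the Output below) until it finishes. A minimal sketch, assuming only the Python standard library and a REPLICATE_API_TOKEN environment variable:

import json
import os
import time
import urllib.request

def wait_for_prediction(get_url: str) -> dict:
    # Poll the prediction's urls.get endpoint until it reaches a terminal state.
    headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}
    while True:
        req = urllib.request.Request(get_url, headers=headers)
        with urllib.request.urlopen(req) as resp:
            prediction = json.load(resp)
        if prediction["status"] in ("succeeded", "failed", "canceled"):
            return prediction
        time.sleep(1)  # be gentle on the API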
Output
{ "completed_at": "2024-01-23T02:49:03.906204Z", "created_at": "2024-01-23T02:48:35.339393Z", "data_removed": false, "error": null, "id": "f7y56ddbjdcffw2kgj4qajzibq", "input": { "cfg": 3, "lora": "anime/2B.safetensors", "steps": 30, "width": 720, "height": 1080, "batch_size": 4, "input_prompt": "masterpiece, best quality, highres, hm2b, black blindfold, covered eyes, mole under mouth, clothing cutout, long sleeves, puffy sleeves, juliet sleeves, feather trim, black thighhighs, black gloves, black dress, black skirt, outdoor, grass, building, ruins, field, standing, cowboy shot,", "sampler_name": "dpmpp_2m", "lora_strength": 1, "negative_prompt": "lowres, bad anatomy, bad hands, text, error, missing fingers, (worst quality, low quality:1.4), monochrome, zombie, (interlocked fingers:1.2)", "checkpoint_model": "Era.safetensors" }, "logs": "Using seed: 16548799\ngot prompt\nmodel_type EPS\nadm 0\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nmissing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}\nleft over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])\nRequested to load SD1ClipModel\nLoading 1 new model\nRequested to load BaseModel\nLoading 1 new model\n 0%| | 0/30 [00:00<?, ?it/s]\n 3%|▎ | 1/30 [00:00<00:03, 9.64it/s]\n 7%|▋ | 2/30 [00:00<00:15, 1.82it/s]\n 10%|█ | 3/30 [00:01<00:14, 1.91it/s]\n 13%|█▎ | 4/30 [00:01<00:13, 1.96it/s]\n 17%|█▋ | 5/30 [00:02<00:12, 1.98it/s]\n 20%|██ | 6/30 [00:02<00:11, 2.00it/s]\n 23%|██▎ | 7/30 [00:03<00:11, 2.01it/s]\n 27%|██▋ | 8/30 [00:03<00:10, 2.03it/s]\n 30%|███ | 9/30 [00:04<00:10, 2.03it/s]\n 33%|███▎ | 10/30 [00:04<00:09, 2.04it/s]\n 37%|███▋ | 11/30 [00:05<00:09, 2.04it/s]\n 40%|████ | 12/30 [00:05<00:08, 2.04it/s]\n 43%|████▎ | 13/30 [00:06<00:08, 2.03it/s]\n 47%|████▋ | 14/30 [00:06<00:07, 2.03it/s]\n 50%|█████ | 15/30 [00:07<00:07, 2.03it/s]\n 53%|█████▎ | 16/30 [00:07<00:06, 2.04it/s]\n 57%|█████▋ | 17/30 [00:08<00:06, 2.04it/s]\n 60%|██████ | 18/30 [00:08<00:05, 2.04it/s]\n 63%|██████▎ | 19/30 [00:09<00:05, 2.04it/s]\n 67%|██████▋ | 20/30 [00:09<00:04, 2.04it/s]\n 70%|███████ | 21/30 [00:10<00:04, 2.04it/s]\n 73%|███████▎ | 22/30 [00:10<00:03, 2.04it/s]\n 77%|███████▋ | 23/30 [00:11<00:03, 2.04it/s]\n 80%|████████ | 24/30 [00:11<00:02, 2.04it/s]\n 83%|████████▎ | 25/30 [00:12<00:02, 2.04it/s]\n 87%|████████▋ | 26/30 [00:12<00:01, 2.03it/s]\n 90%|█████████ | 27/30 [00:13<00:01, 2.03it/s]\n 93%|█████████▎| 28/30 [00:13<00:01, 1.99it/s]\n 97%|█████████▋| 29/30 [00:14<00:00, 2.00it/s]\n100%|██████████| 30/30 [00:14<00:00, 2.01it/s]\n100%|██████████| 30/30 [00:14<00:00, 2.03it/s]\nPrompt executed in 24.17 seconds\nnode output: {'images': [{'filename': 'ComfyUI_00029_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00030_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00031_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00032_.png', 'subfolder': '', 'type': 'output'}]}\noutput\noutput\noutput\noutput\n4 images generated successfully", "metrics": { "predict_time": 28.513054, "total_time": 28.566811 }, "output": [ "https://replicate.delivery/pbxt/DfH4fvCIazrEREfGIdgLW5k7QjaSHblyapomQA2XZH36YbeIB/out-0.png", "https://replicate.delivery/pbxt/gMXLFKooUMbsEhPy7QJiZdja9pRpEjvxRCYEdgo3rHpHbzjE/out-1.png", "https://replicate.delivery/pbxt/pZvmfHMgyj0ehE63vm3NX7iUQ2yXXAtPyZBEp3Oy330fYbeIB/out-2.png", 
"https://replicate.delivery/pbxt/BYOig7QIqtJZBRHam5eZGdjO6Yv7uxb0Tvysdd1RFB2P2mHJA/out-3.png" ], "started_at": "2024-01-23T02:48:35.393150Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/f7y56ddbjdcffw2kgj4qajzibq", "cancel": "https://api.replicate.com/v1/predictions/f7y56ddbjdcffw2kgj4qajzibq/cancel" }, "version": "8b6facbe498adbdb66ec6b13740b6a2b56d57a01ddaadd26b63f85de03e98b80" }
Prediction
bryantanjw/entropy-lol:8b6facbe498adbdb66ec6b13740b6a2b56d57a01ddaadd26b63f85de03e98b80
- ID: 7penhm3bv3l7fd7aybaivyzhiy
- Status: Succeeded
- Source: API
- Hardware: A40

Input
- cfg: 6
- lora: gaming/Ahri.safetensors
- seed: 0
- steps: 25
- width: 720
- height: 1080
- batch_size: 4
- custom_lora: ""
- input_prompt: masterpiece, (detailed, highres, best quality), 1girl, IncrsAhri, braid, fox tail, multiple tails, korean clothes, skirt, blurry, blurry background, arms behind back, seductive smile,
- sampler_name: dpmpp_2m
- lora_strength: 1
- negative_prompt: (worst quality:1.4), (low quality:1.4)
- checkpoint_model: Aniverse.safetensors
{ "cfg": 6, "lora": "gaming/Ahri.safetensors", "seed": 0, "steps": 25, "width": 720, "height": 1080, "batch_size": 4, "custom_lora": "", "input_prompt": "masterpiece, (detailed, highres, best quality), 1girl, IncrsAhri, braid, fox tail, multiple tails, korean clothes, skirt, blurry, blurry background, arms behind back, seductive smile,", "sampler_name": "dpmpp_2m", "lora_strength": 1, "negative_prompt": "(worst quality:1.4), (low quality:1.4)", "checkpoint_model": "Aniverse.safetensors" }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client (using node:fs/promises, as above):

import Replicate from "replicate";
import fs from "node:fs/promises";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:8b6facbe498adbdb66ec6b13740b6a2b56d57a01ddaadd26b63f85de03e98b80",
  {
    input: {
      cfg: 6,
      lora: "gaming/Ahri.safetensors",
      seed: 0,
      steps: 25,
      width: 720,
      height: 1080,
      batch_size: 4,
      custom_lora: "",
      input_prompt: "masterpiece, (detailed, highres, best quality), 1girl, IncrsAhri, braid, fox tail, multiple tails, korean clothes, skirt, blurry, blurry background, arms behind back, seductive smile,",
      sampler_name: "dpmpp_2m",
      lora_strength: 1,
      negative_prompt: "(worst quality:1.4), (low quality:1.4)",
      checkpoint_model: "Aniverse.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk:
await fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:8b6facbe498adbdb66ec6b13740b6a2b56d57a01ddaadd26b63f85de03e98b80",
    input={
        "cfg": 6,
        "lora": "gaming/Ahri.safetensors",
        "seed": 0,
        "steps": 25,
        "width": 720,
        "height": 1080,
        "batch_size": 4,
        "custom_lora": "",
        "input_prompt": "masterpiece, (detailed, highres, best quality), 1girl, IncrsAhri, braid, fox tail, multiple tails, korean clothes, skirt, blurry, blurry background, arms behind back, seductive smile,",
        "sampler_name": "dpmpp_2m",
        "lora_strength": 1,
        "negative_prompt": "(worst quality:1.4), (low quality:1.4)",
        "checkpoint_model": "Aniverse.safetensors",
    },
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:8b6facbe498adbdb66ec6b13740b6a2b56d57a01ddaadd26b63f85de03e98b80",
    "input": {
      "cfg": 6,
      "lora": "gaming/Ahri.safetensors",
      "seed": 0,
      "steps": 25,
      "width": 720,
      "height": 1080,
      "batch_size": 4,
      "custom_lora": "",
      "input_prompt": "masterpiece, (detailed, highres, best quality), 1girl, IncrsAhri, braid, fox tail, multiple tails, korean clothes, skirt, blurry, blurry background, arms behind back, seductive smile,",
      "sampler_name": "dpmpp_2m",
      "lora_strength": 1,
      "negative_prompt": "(worst quality:1.4), (low quality:1.4)",
      "checkpoint_model": "Aniverse.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-01-22T18:54:13.327350Z", "created_at": "2024-01-22T18:47:46.476575Z", "data_removed": false, "error": null, "id": "7penhm3bv3l7fd7aybaivyzhiy", "input": { "cfg": 6, "lora": "gaming/Ahri.safetensors", "seed": 0, "steps": 25, "width": 720, "height": 1080, "batch_size": 4, "custom_lora": "", "input_prompt": "masterpiece, (detailed, highres, best quality), 1girl, IncrsAhri, braid, fox tail, multiple tails, korean clothes, skirt, blurry, blurry background, arms behind back, seductive smile,", "sampler_name": "dpmpp_2m", "lora_strength": 1, "negative_prompt": "(worst quality:1.4), (low quality:1.4)", "checkpoint_model": "Aniverse.safetensors" }, "logs": "Using seed: 2802535\ngot prompt\nmodel_type EPS\nadm 0\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nmissing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}\nleft over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])\nRequested to load SD1ClipModel\nLoading 1 new model\nRequested to load BaseModel\nLoading 1 new model\n 0%| | 0/25 [00:00<?, ?it/s]\n 4%|▍ | 1/25 [00:00<00:22, 1.05it/s]\n 8%|▊ | 2/25 [00:01<00:15, 1.46it/s]\n 12%|█▏ | 3/25 [00:01<00:13, 1.68it/s]\n 16%|█▌ | 4/25 [00:02<00:11, 1.80it/s]\n 20%|██ | 5/25 [00:02<00:10, 1.88it/s]\n 24%|██▍ | 6/25 [00:03<00:09, 1.92it/s]\n 28%|██▊ | 7/25 [00:03<00:09, 1.96it/s]\n 32%|███▏ | 8/25 [00:04<00:08, 1.98it/s]\n 36%|███▌ | 9/25 [00:04<00:08, 1.99it/s]\n 40%|████ | 10/25 [00:05<00:07, 2.00it/s]\n 44%|████▍ | 11/25 [00:05<00:06, 2.01it/s]\n 48%|████▊ | 12/25 [00:06<00:06, 2.01it/s]\n 52%|█████▏ | 13/25 [00:06<00:05, 2.02it/s]\n 56%|█████▌ | 14/25 [00:07<00:05, 2.03it/s]\n 60%|██████ | 15/25 [00:07<00:04, 2.01it/s]\n 64%|██████▍ | 16/25 [00:08<00:04, 2.02it/s]\n 68%|██████▊ | 17/25 [00:08<00:03, 2.03it/s]\n 72%|███████▏ | 18/25 [00:09<00:03, 2.03it/s]\n 76%|███████▌ | 19/25 [00:09<00:02, 2.03it/s]\n 80%|████████ | 20/25 [00:10<00:02, 2.03it/s]\n 84%|████████▍ | 21/25 [00:10<00:01, 2.03it/s]\n 88%|████████▊ | 22/25 [00:11<00:01, 2.03it/s]\n 92%|█████████▏| 23/25 [00:11<00:00, 2.04it/s]\n 96%|█████████▌| 24/25 [00:12<00:00, 2.04it/s]\n100%|██████████| 25/25 [00:12<00:00, 2.03it/s]\n100%|██████████| 25/25 [00:12<00:00, 1.96it/s]\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nLeftover VAE keys ['model_ema.decay', 'model_ema.num_updates']\nRequested to load AutoencoderKL\nLoading 1 new model\nPrompt executed in 30.71 seconds\nnode output: {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00002_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00003_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00004_.png', 'subfolder': '', 'type': 'output'}]}\noutput\noutput\noutput\noutput\n4 images generated successfully", "metrics": { "predict_time": 35.890803, "total_time": 386.850775 }, "output": [ "https://replicate.delivery/pbxt/ApmmfnUbLoU3eU0PZBAueJCmbGWcJqGFmCLJkfefdWup0rxjE/out-0.png", "https://replicate.delivery/pbxt/zYtKRoFI3mZ2BJXHpr5HdnKmNWuUG2uKj9LIUX3bXxy0rxjE/out-1.png", "https://replicate.delivery/pbxt/x5pDJd1Zrj6IP1vJFItGT5x48YR0yzVLcliQdVkv4JF1rxjE/out-2.png", "https://replicate.delivery/pbxt/2ImENQkRwaIvEdewhNVcg2VLff2RBfUcf1SZ8eGSeX5KqXjHJA/out-3.png" ], "started_at": "2024-01-22T18:53:37.436547Z", "status": "succeeded", "urls": { "get": 
"https://api.replicate.com/v1/predictions/7penhm3bv3l7fd7aybaivyzhiy", "cancel": "https://api.replicate.com/v1/predictions/7penhm3bv3l7fd7aybaivyzhiy/cancel" }, "version": "8b6facbe498adbdb66ec6b13740b6a2b56d57a01ddaadd26b63f85de03e98b80" }
Prediction
bryantanjw/entropy-lol:c5ee23596b3f22ba9a58242c0ae34d264e4eb6f599386a1577b37ed0243a1870
- ID: fn324albkv2zpceoa6a4k4qpoy
- Status: Succeeded
- Source: Web
- Hardware: A40 (Large)

Input
- cfg: 7
- lora: gaming/KDA_All_Out_Ahri.safetensors
- seed: 0
- steps: 30
- width: 720
- height: 1080
- batch_size: 4
- custom_lora: ""
- input_prompt: A photo of a girl, IncrsAhriKDAAllOut, whisker mar…hiny hair, shiny skin, shiny clothes, masterpiece
- sampler_name: dpmpp_2m
- lora_strength: 1
- negative_prompt: (worst quality:1.4), (low quality:1.4), bad fingers
- checkpoint_model: UnleashedDiffusion.safetensors
{ "cfg": 7, "lora": "gaming/KDA_All_Out_Ahri.safetensors", "seed": 0, "steps": 30, "width": 720, "height": 1080, "batch_size": 4, "custom_lora": "", "input_prompt": "A photo of a girl, IncrsAhriKDAAllOut, whisker mar…hiny hair, shiny skin, shiny clothes, masterpiece", "sampler_name": "dpmpp_2m", "lora_strength": 1, "negative_prompt": "(worst quality:1.4), (low quality:1.4), bad fingers", "checkpoint_model": "UnleashedDiffusion.safetensors" }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client (using node:fs/promises, as above):

import Replicate from "replicate";
import fs from "node:fs/promises";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:c5ee23596b3f22ba9a58242c0ae34d264e4eb6f599386a1577b37ed0243a1870",
  {
    input: {
      cfg: 7,
      lora: "gaming/KDA_All_Out_Ahri.safetensors",
      seed: 0,
      steps: 30,
      width: 720,
      height: 1080,
      batch_size: 4,
      custom_lora: "",
      input_prompt: "A photo of a girl, IncrsAhriKDAAllOut, whisker mar…hiny hair, shiny skin, shiny clothes, masterpiece",
      sampler_name: "dpmpp_2m",
      lora_strength: 1,
      negative_prompt: "(worst quality:1.4), (low quality:1.4), bad fingers",
      checkpoint_model: "UnleashedDiffusion.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk:
await fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:c5ee23596b3f22ba9a58242c0ae34d264e4eb6f599386a1577b37ed0243a1870",
    input={
        "cfg": 7,
        "lora": "gaming/KDA_All_Out_Ahri.safetensors",
        "seed": 0,
        "steps": 30,
        "width": 720,
        "height": 1080,
        "batch_size": 4,
        "custom_lora": "",
        "input_prompt": "A photo of a girl, IncrsAhriKDAAllOut, whisker mar…hiny hair, shiny skin, shiny clothes, masterpiece",
        "sampler_name": "dpmpp_2m",
        "lora_strength": 1,
        "negative_prompt": "(worst quality:1.4), (low quality:1.4), bad fingers",
        "checkpoint_model": "UnleashedDiffusion.safetensors",
    },
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:c5ee23596b3f22ba9a58242c0ae34d264e4eb6f599386a1577b37ed0243a1870",
    "input": {
      "cfg": 7,
      "lora": "gaming/KDA_All_Out_Ahri.safetensors",
      "seed": 0,
      "steps": 30,
      "width": 720,
      "height": 1080,
      "batch_size": 4,
      "custom_lora": "",
      "input_prompt": "A photo of a girl, IncrsAhriKDAAllOut, whisker mar…hiny hair, shiny skin, shiny clothes, masterpiece",
      "sampler_name": "dpmpp_2m",
      "lora_strength": 1,
      "negative_prompt": "(worst quality:1.4), (low quality:1.4), bad fingers",
      "checkpoint_model": "UnleashedDiffusion.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-01-21T20:15:55.317246Z", "created_at": "2024-01-21T20:15:34.925629Z", "data_removed": false, "error": null, "id": "fn324albkv2zpceoa6a4k4qpoy", "input": { "cfg": 7, "lora": "gaming/KDA_All_Out_Ahri.safetensors", "seed": 0, "steps": 30, "width": 720, "height": 1080, "batch_size": 4, "custom_lora": "", "input_prompt": "A photo of a girl, IncrsAhriKDAAllOut, whisker mar…hiny hair, shiny skin, shiny clothes, masterpiece", "sampler_name": "dpmpp_2m", "lora_strength": 1, "negative_prompt": "(worst quality:1.4), (low quality:1.4), bad fingers", "checkpoint_model": "UnleashedDiffusion.safetensors" }, "logs": "Using seed: 0\ngot prompt\n 0%| | 0/30 [00:00<?, ?it/s]\n 3%|▎ | 1/30 [00:00<00:03, 9.65it/s]\n 7%|▋ | 2/30 [00:00<00:15, 1.79it/s]\n 10%|█ | 3/30 [00:01<00:14, 1.89it/s]\n 13%|█▎ | 4/30 [00:01<00:13, 1.94it/s]\n 17%|█▋ | 5/30 [00:02<00:12, 1.97it/s]\n 20%|██ | 6/30 [00:02<00:12, 1.99it/s]\n 23%|██▎ | 7/30 [00:03<00:11, 2.00it/s]\n 27%|██▋ | 8/30 [00:03<00:10, 2.01it/s]\n 30%|███ | 9/30 [00:04<00:10, 2.01it/s]\n 33%|███▎ | 10/30 [00:04<00:09, 2.01it/s]\n 37%|███▋ | 11/30 [00:05<00:09, 2.01it/s]\n 40%|████ | 12/30 [00:05<00:09, 1.99it/s]\n 43%|████▎ | 13/30 [00:06<00:08, 2.00it/s]\n 47%|████▋ | 14/30 [00:06<00:07, 2.01it/s]\n 50%|█████ | 15/30 [00:07<00:07, 2.01it/s]\n 53%|█████▎ | 16/30 [00:07<00:06, 2.01it/s]\n 57%|█████▋ | 17/30 [00:08<00:06, 2.01it/s]\n 60%|██████ | 18/30 [00:08<00:05, 2.02it/s]\n 63%|██████▎ | 19/30 [00:09<00:05, 2.02it/s]\n 67%|██████▋ | 20/30 [00:09<00:04, 2.02it/s]\n 70%|███████ | 21/30 [00:10<00:04, 2.02it/s]\n 73%|███████▎ | 22/30 [00:10<00:03, 2.02it/s]\n 77%|███████▋ | 23/30 [00:11<00:03, 2.02it/s]\n 80%|████████ | 24/30 [00:11<00:02, 2.02it/s]\n 83%|████████▎ | 25/30 [00:12<00:02, 2.02it/s]\n 87%|████████▋ | 26/30 [00:12<00:01, 2.01it/s]\n 90%|█████████ | 27/30 [00:13<00:01, 2.02it/s]\n 93%|█████████▎| 28/30 [00:13<00:00, 2.02it/s]\n 97%|█████████▋| 29/30 [00:14<00:00, 2.02it/s]\n100%|██████████| 30/30 [00:14<00:00, 2.02it/s]\n100%|██████████| 30/30 [00:14<00:00, 2.02it/s]\nPrompt executed in 15.95 seconds\nnode output: {'images': [{'filename': 'ComfyUI_00285_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00286_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00287_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00288_.png', 'subfolder': '', 'type': 'output'}]}\noutput\noutput\noutput\noutput\n4 images generated successfully", "metrics": { "predict_time": 20.354146, "total_time": 20.391617 }, "output": [ "https://replicate.delivery/pbxt/SGzoITe7ShTHbapJbgfAuOw4MnDzmebZh0vVfAZqcoHhXL7IB/out-0.png", "https://replicate.delivery/pbxt/y1vuCfIOOoWQDa8st3bbYtVOZdeKIFZEZlb0cb1h9zfzrldkA/out-1.png", "https://replicate.delivery/pbxt/c7dlru2EnK6PNh8aByosfdp6vW2Xe5DLeRh36IHOeTGqXL7IB/out-2.png", "https://replicate.delivery/pbxt/xvjp4co7chZ3IREohBgNbhfbH4u6ehO08hpwg5JBuCP71yOSA/out-3.png" ], "started_at": "2024-01-21T20:15:34.963100Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/fn324albkv2zpceoa6a4k4qpoy", "cancel": "https://api.replicate.com/v1/predictions/fn324albkv2zpceoa6a4k4qpoy/cancel" }, "version": "c5ee23596b3f22ba9a58242c0ae34d264e4eb6f599386a1577b37ed0243a1870" }
Prediction
bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9
- ID: pnj2ek3bvh2ksuyuf6dva57dgm
- Status: Succeeded
- Source: Web
- Hardware: A100 (40GB)

Input
- cfg: 6
- lora: gaming/KDA_All_Out_Ahri.safetensors
- seed: 0
- steps: 20
- width: 360
- height: 540
- batch_size: 3
- custom_lora: ""
- input_prompt: A photo of a girl, IncrsAhriKDAAllOut, whisker mar…hiny hair, shiny skin, shiny clothes, masterpiece
- sampler_name: dpmpp_2m
- lora_strength: 1
- upscale_factor: 3
- negative_prompt: (worst quality:1.4), (low quality:1.4)
- checkpoint_model: UnleashedDiffusion.safetensors
{ "cfg": 6, "lora": "gaming/KDA_All_Out_Ahri.safetensors", "seed": 0, "steps": 20, "width": 360, "height": 540, "batch_size": 3, "custom_lora": "", "input_prompt": "A photo of a girl, IncrsAhriKDAAllOut, whisker mar…hiny hair, shiny skin, shiny clothes, masterpiece", "sampler_name": "dpmpp_2m", "lora_strength": 1, "upscale_factor": 3, "negative_prompt": "(worst quality:1.4), (low quality:1.4)", "checkpoint_model": "UnleashedDiffusion.safetensors" }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client (using node:fs/promises, as above):

import Replicate from "replicate";
import fs from "node:fs/promises";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9",
  {
    input: {
      cfg: 6,
      lora: "gaming/KDA_All_Out_Ahri.safetensors",
      seed: 0,
      steps: 20,
      width: 360,
      height: 540,
      batch_size: 3,
      custom_lora: "",
      input_prompt: "A photo of a girl, IncrsAhriKDAAllOut, whisker mar…hiny hair, shiny skin, shiny clothes, masterpiece",
      sampler_name: "dpmpp_2m",
      lora_strength: 1,
      upscale_factor: 3,
      negative_prompt: "(worst quality:1.4), (low quality:1.4)",
      checkpoint_model: "UnleashedDiffusion.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk:
await fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9",
    input={
        "cfg": 6,
        "lora": "gaming/KDA_All_Out_Ahri.safetensors",
        "seed": 0,
        "steps": 20,
        "width": 360,
        "height": 540,
        "batch_size": 3,
        "custom_lora": "",
        "input_prompt": "A photo of a girl, IncrsAhriKDAAllOut, whisker mar…hiny hair, shiny skin, shiny clothes, masterpiece",
        "sampler_name": "dpmpp_2m",
        "lora_strength": 1,
        "upscale_factor": 3,
        "negative_prompt": "(worst quality:1.4), (low quality:1.4)",
        "checkpoint_model": "UnleashedDiffusion.safetensors",
    },
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9",
    "input": {
      "cfg": 6,
      "lora": "gaming/KDA_All_Out_Ahri.safetensors",
      "seed": 0,
      "steps": 20,
      "width": 360,
      "height": 540,
      "batch_size": 3,
      "custom_lora": "",
      "input_prompt": "A photo of a girl, IncrsAhriKDAAllOut, whisker mar…hiny hair, shiny skin, shiny clothes, masterpiece",
      "sampler_name": "dpmpp_2m",
      "lora_strength": 1,
      "upscale_factor": 3,
      "negative_prompt": "(worst quality:1.4), (low quality:1.4)",
      "checkpoint_model": "UnleashedDiffusion.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-01-28T21:17:59.677278Z", "created_at": "2024-01-28T21:17:11.129605Z", "data_removed": false, "error": null, "id": "pnj2ek3bvh2ksuyuf6dva57dgm", "input": { "cfg": 6, "lora": "gaming/KDA_All_Out_Ahri.safetensors", "seed": 0, "steps": 20, "width": 360, "height": 540, "batch_size": 3, "custom_lora": "", "input_prompt": "A photo of a girl, IncrsAhriKDAAllOut, whisker mar…hiny hair, shiny skin, shiny clothes, masterpiece", "sampler_name": "dpmpp_2m", "lora_strength": 1, "upscale_factor": 3, "negative_prompt": "(worst quality:1.4), (low quality:1.4)", "checkpoint_model": "UnleashedDiffusion.safetensors" }, "logs": "Using seed: 6988351\ngot prompt\nmodel_type EPS\nadm 0\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nmissing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}\nleft over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])\nRequested to load SD1ClipModel\nLoading 1 new model\nRequested to load BaseModel\nLoading 1 new model\n 0%| | 0/20 [00:00<?, ?it/s]\n 10%|█ | 2/20 [00:00<00:01, 14.94it/s]\n 20%|██ | 4/20 [00:00<00:01, 15.26it/s]\n 30%|███ | 6/20 [00:00<00:00, 15.60it/s]\n 40%|████ | 8/20 [00:00<00:00, 15.58it/s]\n 50%|█████ | 10/20 [00:00<00:00, 15.53it/s]\n 60%|██████ | 12/20 [00:00<00:00, 15.29it/s]\n 70%|███████ | 14/20 [00:00<00:00, 13.86it/s]\n 80%|████████ | 16/20 [00:01<00:00, 13.67it/s]\n 90%|█████████ | 18/20 [00:01<00:00, 14.24it/s]\n100%|██████████| 20/20 [00:01<00:00, 13.61it/s]\n100%|██████████| 20/20 [00:01<00:00, 14.36it/s]\nIterativeLatentUpscale[1/3]: 600.0x893.3 (scale:1.67) \n 0%| | 0/20 [00:00<?, ?it/s]\n 10%|█ | 2/20 [00:00<00:02, 8.98it/s]\n 15%|█▌ | 3/20 [00:00<00:02, 7.53it/s]\n 20%|██ | 4/20 [00:00<00:02, 6.96it/s]\n 25%|██▌ | 5/20 [00:00<00:02, 6.63it/s]\n 30%|███ | 6/20 [00:00<00:02, 6.48it/s]\n 35%|███▌ | 7/20 [00:01<00:02, 6.35it/s]\n 40%|████ | 8/20 [00:01<00:01, 6.31it/s]\n 45%|████▌ | 9/20 [00:01<00:01, 6.24it/s]\n 50%|█████ | 10/20 [00:01<00:01, 6.23it/s]\n 55%|█████▌ | 11/20 [00:01<00:01, 6.11it/s]\n 60%|██████ | 12/20 [00:01<00:01, 6.13it/s]\n 65%|██████▌ | 13/20 [00:02<00:01, 6.13it/s]\n 70%|███████ | 14/20 [00:02<00:00, 6.15it/s]\n 75%|███████▌ | 15/20 [00:02<00:00, 6.13it/s]\n 80%|████████ | 16/20 [00:02<00:00, 6.14it/s]\n 85%|████████▌ | 17/20 [00:02<00:00, 6.12it/s]\n 90%|█████████ | 18/20 [00:02<00:00, 6.15it/s]\n 95%|█████████▌| 19/20 [00:02<00:00, 6.16it/s]\n100%|██████████| 20/20 [00:03<00:00, 6.16it/s]\n100%|██████████| 20/20 [00:03<00:00, 6.33it/s]\nIterativeLatentUpscale[2/3]: 840.0x1250.7 (scale:2.33)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:02, 9.38it/s]\n 10%|█ | 2/20 [00:00<00:04, 3.90it/s]\n 15%|█▌ | 3/20 [00:00<00:05, 3.25it/s]\n 20%|██ | 4/20 [00:01<00:05, 3.04it/s]\n 25%|██▌ | 5/20 [00:01<00:05, 2.94it/s]\n 30%|███ | 6/20 [00:01<00:04, 2.88it/s]\n 35%|███▌ | 7/20 [00:02<00:04, 2.84it/s]\n 40%|████ | 8/20 [00:02<00:04, 2.82it/s]\n 45%|████▌ | 9/20 [00:03<00:03, 2.80it/s]\n 50%|█████ | 10/20 [00:03<00:03, 2.79it/s]\n 55%|█████▌ | 11/20 [00:03<00:03, 2.78it/s]\n 60%|██████ | 12/20 [00:04<00:02, 2.78it/s]\n 65%|██████▌ | 13/20 [00:04<00:02, 2.78it/s]\n 70%|███████ | 14/20 [00:04<00:02, 2.77it/s]\n 75%|███████▌ | 15/20 [00:05<00:01, 2.77it/s]\n 80%|████████ | 16/20 [00:05<00:01, 2.77it/s]\n 85%|████████▌ | 17/20 [00:05<00:01, 2.77it/s]\n 90%|█████████ | 18/20 [00:06<00:00, 2.77it/s]\n 95%|█████████▌| 19/20 [00:06<00:00, 2.77it/s]\n100%|██████████| 
20/20 [00:06<00:00, 2.76it/s]\n100%|██████████| 20/20 [00:06<00:00, 2.86it/s]\nIterativeLatentUpscale[Final]: 1080.0x1608.0 (scale:3.00)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:05, 3.72it/s]\n 10%|█ | 2/20 [00:01<00:11, 1.56it/s]\n 15%|█▌ | 3/20 [00:02<00:12, 1.32it/s]\n 20%|██ | 4/20 [00:02<00:13, 1.23it/s]\n 25%|██▌ | 5/20 [00:03<00:12, 1.18it/s]\n 30%|███ | 6/20 [00:04<00:12, 1.16it/s]\n 35%|███▌ | 7/20 [00:05<00:11, 1.14it/s]\n 40%|████ | 8/20 [00:06<00:10, 1.13it/s]\n 45%|████▌ | 9/20 [00:07<00:09, 1.13it/s]\n 50%|█████ | 10/20 [00:08<00:08, 1.12it/s]\n 55%|█████▌ | 11/20 [00:09<00:08, 1.12it/s]\n 60%|██████ | 12/20 [00:10<00:07, 1.12it/s]\n 65%|██████▌ | 13/20 [00:11<00:06, 1.11it/s]\n 70%|███████ | 14/20 [00:11<00:05, 1.11it/s]\n 75%|███████▌ | 15/20 [00:12<00:04, 1.11it/s]\n 80%|████████ | 16/20 [00:13<00:03, 1.11it/s]\n 85%|████████▌ | 17/20 [00:14<00:02, 1.11it/s]\n 90%|█████████ | 18/20 [00:15<00:01, 1.11it/s]\n 95%|█████████▌| 19/20 [00:16<00:00, 1.11it/s]\n100%|██████████| 20/20 [00:17<00:00, 1.11it/s]\n100%|██████████| 20/20 [00:17<00:00, 1.15it/s]\nPrompt executed in 44.69 seconds\nnode output: {'images': [{'filename': 'ComfyUI_temp_nfmhu_00010_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_nfmhu_00011_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_nfmhu_00012_.png', 'subfolder': '', 'type': 'temp'}]}\ntemp\ntemp\ntemp\nnode output: {'images': [{'filename': 'ComfyUI_00054_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00055_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00056_.png', 'subfolder': '', 'type': 'output'}]}\noutput\noutput\noutput\n3 images generated successfully", "metrics": { "predict_time": 48.528632, "total_time": 48.547673 }, "output": [ "https://replicate.delivery/pbxt/4cwX65v0kobyNxdT2qjAzbemXfeKMgM3K4HgRpl1I8eYodEJB/out-0.png", "https://replicate.delivery/pbxt/5GiPOAUFxxKVJxb0PDCe2ZS7ebtbSIfIMXb2rrc1VmGP0OikA/out-1.png", "https://replicate.delivery/pbxt/bk5B0O2hzLoVA9rXGONlK1mLa02TVgRsDkjAoPMtBNzh2RkE/out-2.png" ], "started_at": "2024-01-28T21:17:11.148646Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/pnj2ek3bvh2ksuyuf6dva57dgm", "cancel": "https://api.replicate.com/v1/predictions/pnj2ek3bvh2ksuyuf6dva57dgm/cancel" }, "version": "58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9" }
Prediction
bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9
- ID: r4ii5ctbxgdrvvzmq6ppee4tru
- Status: Succeeded
- Source: Web
- Hardware: A100 (40GB)

Input
- cfg: 7
- lora: gaming/Battle_Bunny_Riven.safetensors
- steps: 20
- width: 340
- height: 512
- batch_size: 3
- input_prompt: battle bunny riven, 1girl, strapless leotard, pantyhose, rabbit ears, folded ponytail, white hair, necktie, wrist cuffs, sitting on sofa, legs up, elegant room, best quality, masterpiece
- sampler_name: dpmpp_2m
- lora_strength: 1
- upscale_factor: 3
- negative_prompt: (worst quality:1.4), (low quality:1.4), simple background
- checkpoint_model: Pastel.safetensors
{ "cfg": 7, "lora": "gaming/Battle_Bunny_Riven.safetensors", "steps": 20, "width": 340, "height": 512, "batch_size": 3, "input_prompt": "battle bunny riven, 1girl, strapless leotard, pantyhose, rabbit ears, folded ponytail, white hair, necktie, wrist cuffs, sitting on sofa, legs up, elegant room, best quality, masterpiece", "sampler_name": "dpmpp_2m", "lora_strength": 1, "upscale_factor": 3, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background", "checkpoint_model": "Pastel.safetensors" }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client (using node:fs/promises, as above):

import Replicate from "replicate";
import fs from "node:fs/promises";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9",
  {
    input: {
      cfg: 7,
      lora: "gaming/Battle_Bunny_Riven.safetensors",
      steps: 20,
      width: 340,
      height: 512,
      batch_size: 3,
      input_prompt: "battle bunny riven, 1girl, strapless leotard, pantyhose, rabbit ears, folded ponytail, white hair, necktie, wrist cuffs, sitting on sofa, legs up, elegant room, best quality, masterpiece",
      sampler_name: "dpmpp_2m",
      lora_strength: 1,
      upscale_factor: 3,
      negative_prompt: "(worst quality:1.4), (low quality:1.4), simple background",
      checkpoint_model: "Pastel.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk:
await fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9",
    input={
        "cfg": 7,
        "lora": "gaming/Battle_Bunny_Riven.safetensors",
        "steps": 20,
        "width": 340,
        "height": 512,
        "batch_size": 3,
        "input_prompt": "battle bunny riven, 1girl, strapless leotard, pantyhose, rabbit ears, folded ponytail, white hair, necktie, wrist cuffs, sitting on sofa, legs up, elegant room, best quality, masterpiece",
        "sampler_name": "dpmpp_2m",
        "lora_strength": 1,
        "upscale_factor": 3,
        "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background",
        "checkpoint_model": "Pastel.safetensors",
    },
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9",
    "input": {
      "cfg": 7,
      "lora": "gaming/Battle_Bunny_Riven.safetensors",
      "steps": 20,
      "width": 340,
      "height": 512,
      "batch_size": 3,
      "input_prompt": "battle bunny riven, 1girl, strapless leotard, pantyhose, rabbit ears, folded ponytail, white hair, necktie, wrist cuffs, sitting on sofa, legs up, elegant room, best quality, masterpiece",
      "sampler_name": "dpmpp_2m",
      "lora_strength": 1,
      "upscale_factor": 3,
      "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background",
      "checkpoint_model": "Pastel.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-01-28T20:27:01.953142Z", "created_at": "2024-01-28T20:16:23.582864Z", "data_removed": false, "error": null, "id": "r4ii5ctbxgdrvvzmq6ppee4tru", "input": { "cfg": 7, "lora": "gaming/Battle_Bunny_Riven.safetensors", "steps": 20, "width": 340, "height": 512, "batch_size": 3, "input_prompt": "battle bunny riven, 1girl, strapless leotard, pantyhose, rabbit ears, folded ponytail, white hair, necktie, wrist cuffs, sitting on sofa, legs up, elegant room, best quality, masterpiece", "sampler_name": "dpmpp_2m", "lora_strength": 1, "upscale_factor": 3, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background", "checkpoint_model": "Pastel.safetensors" }, "logs": "Using seed: 903839\ngot prompt\nmodel_type EPS\nadm 0\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nmissing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}\nleft over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nLeftover VAE keys ['model_ema.decay', 'model_ema.num_updates']\nRequested to load SD1ClipModel\nLoading 1 new model\nRequested to load BaseModel\nLoading 1 new model\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:07, 2.45it/s]\n 15%|█▌ | 3/20 [00:00<00:02, 6.09it/s]\n 25%|██▌ | 5/20 [00:00<00:01, 9.03it/s]\n 35%|███▌ | 7/20 [00:00<00:01, 11.33it/s]\n 45%|████▌ | 9/20 [00:00<00:00, 12.98it/s]\n 55%|█████▌ | 11/20 [00:01<00:00, 14.21it/s]\n 65%|██████▌ | 13/20 [00:01<00:00, 15.16it/s]\n 75%|███████▌ | 15/20 [00:01<00:00, 15.83it/s]\n 85%|████████▌ | 17/20 [00:01<00:00, 16.33it/s]\n 95%|█████████▌| 19/20 [00:01<00:00, 16.70it/s]\n100%|██████████| 20/20 [00:01<00:00, 12.90it/s]\nRequested to load AutoencoderKL\nLoading 1 new model\nIterativeLatentUpscale[1/3]: 560.0x853.3 (scale:1.67)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:03, 5.07it/s]\n 15%|█▌ | 3/20 [00:00<00:02, 8.00it/s]\n 20%|██ | 4/20 [00:00<00:02, 7.97it/s]\n 25%|██▌ | 5/20 [00:00<00:01, 8.11it/s]\n 30%|███ | 6/20 [00:00<00:01, 8.13it/s]\n 35%|███▌ | 7/20 [00:00<00:01, 8.13it/s]\n 40%|████ | 8/20 [00:01<00:01, 8.12it/s]\n 45%|████▌ | 9/20 [00:01<00:01, 8.13it/s]\n 50%|█████ | 10/20 [00:01<00:01, 8.14it/s]\n 55%|█████▌ | 11/20 [00:01<00:01, 8.13it/s]\n 60%|██████ | 12/20 [00:01<00:00, 8.15it/s]\n 65%|██████▌ | 13/20 [00:01<00:00, 8.16it/s]\n 70%|███████ | 14/20 [00:01<00:00, 8.16it/s]\n 75%|███████▌ | 15/20 [00:01<00:00, 8.16it/s]\n 80%|████████ | 16/20 [00:01<00:00, 8.15it/s]\n 85%|████████▌ | 17/20 [00:02<00:00, 8.16it/s]\n 90%|█████████ | 18/20 [00:02<00:00, 8.15it/s]\n 95%|█████████▌| 19/20 [00:02<00:00, 8.15it/s]\n100%|██████████| 20/20 [00:02<00:00, 8.13it/s]\n100%|██████████| 20/20 [00:02<00:00, 8.05it/s]\nIterativeLatentUpscale[2/3]: 784.0x1194.7 (scale:2.33)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:04, 4.54it/s]\n 10%|█ | 2/20 [00:00<00:04, 4.40it/s]\n 15%|█▌ | 3/20 [00:00<00:04, 3.60it/s]\n 20%|██ | 4/20 [00:01<00:04, 3.31it/s]\n 25%|██▌ | 5/20 [00:01<00:04, 3.17it/s]\n 30%|███ | 6/20 [00:01<00:04, 3.10it/s]\n 35%|███▌ | 7/20 [00:02<00:04, 3.05it/s]\n 40%|████ | 8/20 [00:02<00:03, 3.02it/s]\n 45%|████▌ | 9/20 [00:02<00:03, 3.00it/s]\n 50%|█████ | 10/20 [00:03<00:03, 2.98it/s]\n 55%|█████▌ | 11/20 [00:03<00:03, 2.97it/s]\n 60%|██████ | 12/20 [00:03<00:02, 2.97it/s]\n 65%|██████▌ | 13/20 [00:04<00:02, 2.96it/s]\n 
70%|███████ | 14/20 [00:04<00:02, 2.96it/s]\n 75%|███████▌ | 15/20 [00:04<00:01, 2.96it/s]\n 80%|████████ | 16/20 [00:05<00:01, 2.95it/s]\n 85%|████████▌ | 17/20 [00:05<00:01, 2.95it/s]\n 90%|█████████ | 18/20 [00:05<00:00, 2.95it/s]\n 95%|█████████▌| 19/20 [00:06<00:00, 2.95it/s]\n100%|██████████| 20/20 [00:06<00:00, 2.95it/s]\n100%|██████████| 20/20 [00:06<00:00, 3.05it/s]\nIterativeLatentUpscale[Final]: 1008.0x1536.0 (scale:3.00)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:03, 4.80it/s]\n 10%|█ | 2/20 [00:00<00:08, 2.07it/s]\n 15%|█▌ | 3/20 [00:01<00:09, 1.75it/s]\n 20%|██ | 4/20 [00:02<00:09, 1.64it/s]\n 25%|██▌ | 5/20 [00:02<00:09, 1.58it/s]\n 30%|███ | 6/20 [00:03<00:09, 1.54it/s]\n 35%|███▌ | 7/20 [00:04<00:08, 1.52it/s]\n 40%|████ | 8/20 [00:04<00:07, 1.51it/s]\n 45%|████▌ | 9/20 [00:05<00:07, 1.50it/s]\n 50%|█████ | 10/20 [00:06<00:06, 1.49it/s]\n 55%|█████▌ | 11/20 [00:06<00:06, 1.49it/s]\n 60%|██████ | 12/20 [00:07<00:05, 1.49it/s]\n 65%|██████▌ | 13/20 [00:08<00:04, 1.48it/s]\n 70%|███████ | 14/20 [00:08<00:04, 1.48it/s]\n 75%|███████▌ | 15/20 [00:09<00:03, 1.48it/s]\n 80%|████████ | 16/20 [00:10<00:02, 1.48it/s]\n 85%|████████▌ | 17/20 [00:11<00:02, 1.48it/s]\n 90%|█████████ | 18/20 [00:11<00:01, 1.48it/s]\n 95%|█████████▌| 19/20 [00:12<00:00, 1.48it/s]\n100%|██████████| 20/20 [00:13<00:00, 1.48it/s]\n100%|██████████| 20/20 [00:13<00:00, 1.53it/s]\nPrompt executed in 38.95 seconds\nnode output: {'images': [{'filename': 'ComfyUI_temp_axcxs_00001_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_axcxs_00002_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_axcxs_00003_.png', 'subfolder': '', 'type': 'temp'}]}\ntemp\ntemp\ntemp\nnode output: {'images': [{'filename': 'ComfyUI_00045_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00046_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00047_.png', 'subfolder': '', 'type': 'output'}]}\noutput\noutput\noutput\n3 images generated successfully", "metrics": { "predict_time": 42.276786, "total_time": 638.370278 }, "output": [ "https://replicate.delivery/pbxt/m2PxwfGXt8yMRyrwCYcz0Dbto6MzRgC7UfedncgXdNUrUNikA/out-0.png", "https://replicate.delivery/pbxt/yTiYzPytdmKfMioquNZJfB8t4jkgAyUQtKEwUrOBBnjVqGRSA/out-1.png", "https://replicate.delivery/pbxt/6LuseN47GuTfWkMA3qGuEqNCjmWK8cFzopz1seYsVTBrUNikA/out-2.png" ], "started_at": "2024-01-28T20:26:19.676356Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/r4ii5ctbxgdrvvzmq6ppee4tru", "cancel": "https://api.replicate.com/v1/predictions/r4ii5ctbxgdrvvzmq6ppee4tru/cancel" }, "version": "58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9" }
Prediction
bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9
ID: woux2vtbb3h4ok5bip4xls3cue
Status: Succeeded
Source: Web
Hardware: A100 (40GB)
Input
- cfg: 7
- lora: anime/Lucy_Cyberpunk.safetensors
- steps: 20
- width: 340
- height: 512
- batch_size: 2
- input_prompt: masterpiece, best quality, highres, lu1, cyborg, multicolored hair, makeup, bare shoulders, black leotard, highleg leotard, (thong:1.1), white jacket, open jacket, belt, shorts, cowboy shot, planet, moon, earth (planet)
- sampler_name: dpmpp_2m
- lora_strength: 1
- upscale_factor: 3
- negative_prompt: (worst quality:1.4), (low quality:1.4), simple background, bad anatomy
- checkpoint_model: MeinaAlter.safetensors
{ "cfg": 7, "lora": "anime/Lucy_Cyberpunk.safetensors", "steps": 20, "width": 340, "height": 512, "batch_size": 2, "input_prompt": "masterpiece, best quality, highres, lu1, cyborg, multicolored hair, makeup, bare shoulders, black leotard, highleg leotard, (thong:1.1), white jacket, open jacket, belt, shorts, cowboy shot, planet, moon, earth (planet)", "sampler_name": "dpmpp_2m", "lora_strength": 1, "upscale_factor": 3, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy", "checkpoint_model": "MeinaAlter.safetensors" }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9",
  {
    input: {
      cfg: 7,
      lora: "anime/Lucy_Cyberpunk.safetensors",
      steps: 20,
      width: 340,
      height: 512,
      batch_size: 2,
      input_prompt: "masterpiece, best quality, highres, lu1, cyborg, multicolored hair, makeup, bare shoulders, black leotard, highleg leotard, (thong:1.1), white jacket, open jacket, belt, shorts, cowboy shot, planet, moon, earth (planet)",
      sampler_name: "dpmpp_2m",
      lora_strength: 1,
      upscale_factor: 3,
      negative_prompt: "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
      checkpoint_model: "MeinaAlter.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk (the promises API accepts the stream-like output):
await fs.promises.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9",
    input={
        "cfg": 7,
        "lora": "anime/Lucy_Cyberpunk.safetensors",
        "steps": 20,
        "width": 340,
        "height": 512,
        "batch_size": 2,
        "input_prompt": "masterpiece, best quality, highres, lu1, cyborg, multicolored hair, makeup, bare shoulders, black leotard, highleg leotard, (thong:1.1), white jacket, open jacket, belt, shorts, cowboy shot, planet, moon, earth (planet)",
        "sampler_name": "dpmpp_2m",
        "lora_strength": 1,
        "upscale_factor": 3,
        "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
        "checkpoint_model": "MeinaAlter.safetensors"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
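With batch_size set to 2, output is a list with one entry per generated image (the Output section below shows two URLs). A minimal sketch for saving the whole batch to disk, assuming the client hands back plain URL strings as in that JSON; newer client versions may instead return file-like objects, in which case output[i].read() gives the bytes directly:

import requests

# 'output' is the list returned by replicate.run(...) above.
for i, url in enumerate(output):
    resp = requests.get(url, timeout=120)  # assumes URL strings, not file objects
    resp.raise_for_status()
    with open(f"out-{i}.png", "wb") as f:
        f.write(resp.content)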
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9",
    "input": {
      "cfg": 7,
      "lora": "anime/Lucy_Cyberpunk.safetensors",
      "steps": 20,
      "width": 340,
      "height": 512,
      "batch_size": 2,
      "input_prompt": "masterpiece, best quality, highres, lu1, cyborg, multicolored hair, makeup, bare shoulders, black leotard, highleg leotard, (thong:1.1), white jacket, open jacket, belt, shorts, cowboy shot, planet, moon, earth (planet)",
      "sampler_name": "dpmpp_2m",
      "lora_strength": 1,
      "upscale_factor": 3,
      "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
      "checkpoint_model": "MeinaAlter.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
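The same flow works from any HTTP client without the Prefer: wait header: create the prediction, then poll the urls.get endpoint until the status settles. A rough Python sketch using the requests library (input abridged; see the full payload above):

import os
import time
import requests

headers = {
    "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}
body = {
    "version": "bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9",
    "input": {"width": 340, "height": 512, "batch_size": 2},  # abridged
}
prediction = requests.post(
    "https://api.replicate.com/v1/predictions", json=body, headers=headers
).json()

# Poll the prediction's own "get" URL until it leaves the in-flight states.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(2)
    prediction = requests.get(prediction["urls"]["get"], headers=headers).json()

print(prediction["status"], prediction.get("output"))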
Output
{ "completed_at": "2024-01-28T23:19:35.776284Z", "created_at": "2024-01-28T23:10:52.800807Z", "data_removed": false, "error": null, "id": "woux2vtbb3h4ok5bip4xls3cue", "input": { "cfg": 7, "lora": "anime/Lucy_Cyberpunk.safetensors", "steps": 20, "width": 340, "height": 512, "batch_size": 2, "input_prompt": "masterpiece, best quality, highres, lu1, cyborg, multicolored hair, makeup, bare shoulders, black leotard, highleg leotard, (thong:1.1), white jacket, open jacket, belt, shorts, cowboy shot, planet, moon, earth (planet)", "sampler_name": "dpmpp_2m", "lora_strength": 1, "upscale_factor": 3, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy", "checkpoint_model": "MeinaAlter.safetensors" }, "logs": "Using seed: 13161991\ngot prompt\nmodel_type EPS\nadm 0\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nmissing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}\nleft over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nLeftover VAE keys ['model_ema.decay', 'model_ema.num_updates']\nRequested to load SD1ClipModel\nLoading 1 new model\nRequested to load BaseModel\nLoading 1 new model\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:07, 2.52it/s]\n 15%|█▌ | 3/20 [00:00<00:02, 6.66it/s]\n 25%|██▌ | 5/20 [00:00<00:01, 9.81it/s]\n 35%|███▌ | 7/20 [00:00<00:01, 11.97it/s]\n 45%|████▌ | 9/20 [00:00<00:00, 13.48it/s]\n 55%|█████▌ | 11/20 [00:00<00:00, 14.79it/s]\n 65%|██████▌ | 13/20 [00:01<00:00, 15.68it/s]\n 75%|███████▌ | 15/20 [00:01<00:00, 16.33it/s]\n 85%|████████▌ | 17/20 [00:01<00:00, 16.83it/s]\n 95%|█████████▌| 19/20 [00:01<00:00, 17.07it/s]\n100%|██████████| 20/20 [00:01<00:00, 13.45it/s]\nRequested to load AutoencoderKL\nLoading 1 new model\nIterativeLatentUpscale[1/3]: 560.0x853.3 (scale:1.67)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:03, 5.35it/s]\n 15%|█▌ | 3/20 [00:00<00:01, 9.57it/s]\n 25%|██▌ | 5/20 [00:00<00:01, 10.54it/s]\n 35%|███▌ | 7/20 [00:00<00:01, 10.99it/s]\n 45%|████▌ | 9/20 [00:00<00:00, 11.23it/s]\n 55%|█████▌ | 11/20 [00:01<00:00, 11.28it/s]\n 65%|██████▌ | 13/20 [00:01<00:00, 11.41it/s]\n 75%|███████▌ | 15/20 [00:01<00:00, 11.50it/s]\n 85%|████████▌ | 17/20 [00:01<00:00, 11.54it/s]\n 95%|█████████▌| 19/20 [00:01<00:00, 11.58it/s]\n100%|██████████| 20/20 [00:01<00:00, 11.11it/s]\nIterativeLatentUpscale[2/3]: 784.0x1194.7 (scale:2.33)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:03, 5.96it/s]\n 10%|█ | 2/20 [00:00<00:02, 6.75it/s]\n 15%|█▌ | 3/20 [00:00<00:03, 5.36it/s]\n 20%|██ | 4/20 [00:00<00:03, 4.89it/s]\n 25%|██▌ | 5/20 [00:00<00:03, 4.66it/s]\n 30%|███ | 6/20 [00:01<00:03, 4.53it/s]\n 35%|███▌ | 7/20 [00:01<00:02, 4.45it/s]\n 40%|████ | 8/20 [00:01<00:02, 4.40it/s]\n 45%|████▌ | 9/20 [00:01<00:02, 4.36it/s]\n 50%|█████ | 10/20 [00:02<00:02, 4.34it/s]\n 55%|█████▌ | 11/20 [00:02<00:02, 4.33it/s]\n 60%|██████ | 12/20 [00:02<00:01, 4.32it/s]\n 65%|██████▌ | 13/20 [00:02<00:01, 4.32it/s]\n 
70%|███████ | 14/20 [00:03<00:01, 4.31it/s]\n 75%|███████▌ | 15/20 [00:03<00:01, 4.29it/s]\n 80%|████████ | 16/20 [00:03<00:00, 4.28it/s]\n 85%|████████▌ | 17/20 [00:03<00:00, 4.29it/s]\n 90%|█████████ | 18/20 [00:04<00:00, 4.29it/s]\n 95%|█████████▌| 19/20 [00:04<00:00, 4.30it/s]\n100%|██████████| 20/20 [00:04<00:00, 4.29it/s]\n100%|██████████| 20/20 [00:04<00:00, 4.45it/s]\nIterativeLatentUpscale[Final]: 1008.0x1536.0 (scale:3.00)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:03, 5.69it/s]\n 10%|█ | 2/20 [00:00<00:05, 3.07it/s]\n 15%|█▌ | 3/20 [00:01<00:06, 2.59it/s]\n 20%|██ | 4/20 [00:01<00:06, 2.39it/s]\n 25%|██▌ | 5/20 [00:01<00:06, 2.31it/s]\n 30%|███ | 6/20 [00:02<00:06, 2.27it/s]\n 35%|███▌ | 7/20 [00:02<00:05, 2.24it/s]\n 40%|████ | 8/20 [00:03<00:05, 2.22it/s]\n 45%|████▌ | 9/20 [00:03<00:04, 2.21it/s]\n 50%|█████ | 10/20 [00:04<00:04, 2.20it/s]\n 55%|█████▌ | 11/20 [00:04<00:04, 2.19it/s]\n 60%|██████ | 12/20 [00:05<00:03, 2.17it/s]\n 65%|██████▌ | 13/20 [00:05<00:03, 2.17it/s]\n 70%|███████ | 14/20 [00:06<00:02, 2.17it/s]\n 75%|███████▌ | 15/20 [00:06<00:02, 2.17it/s]\n 80%|████████ | 16/20 [00:07<00:01, 2.17it/s]\n 85%|████████▌ | 17/20 [00:07<00:01, 2.17it/s]\n 90%|█████████ | 18/20 [00:07<00:00, 2.17it/s]\n 95%|█████████▌| 19/20 [00:08<00:00, 2.17it/s]\n100%|██████████| 20/20 [00:08<00:00, 2.17it/s]\n100%|██████████| 20/20 [00:08<00:00, 2.25it/s]\nPrompt executed in 27.81 seconds\nnode output: {'images': [{'filename': 'ComfyUI_temp_kprtr_00001_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_kprtr_00002_.png', 'subfolder': '', 'type': 'temp'}]}\ntemp\ntemp\nnode output: {'images': [{'filename': 'ComfyUI_00045_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00046_.png', 'subfolder': '', 'type': 'output'}]}\noutput\noutput\n2 images generated successfully", "metrics": { "predict_time": 30.031088, "total_time": 522.975477 }, "output": [ "https://replicate.delivery/pbxt/UMv7UMTpfgUvayZmiVXzUP9uUzJyAoSRrCLeDKm8kcZHMJRSA/out-0.png", "https://replicate.delivery/pbxt/hwQDwyH5TOpDI5BJOgByAvygePIj2cAnwCDGnq8NVWoDmkIJA/out-1.png" ], "started_at": "2024-01-28T23:19:05.745196Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/woux2vtbb3h4ok5bip4xls3cue", "cancel": "https://api.replicate.com/v1/predictions/woux2vtbb3h4ok5bip4xls3cue/cancel" }, "version": "58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9" }
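The IterativeLatentUpscale lines in the logs follow a simple pattern: the requested upscale_factor is reached in three evenly spaced stages applied to the latent resolution, with the input width and height first snapped down to a multiple of 8 (the SD latent granularity, so 340 becomes 336). A small sketch that reproduces the stage sizes printed above; this is an inference from the logged numbers, not code from the model repo:

def upscale_stages(width, height, factor, stages=3):
    # Latents are width/8 x height/8, so pixel sizes snap to multiples of 8.
    base_w, base_h = (width // 8) * 8, (height // 8) * 8
    for k in range(1, stages + 1):
        scale = 1 + (factor - 1) * k / stages
        yield round(base_w * scale, 1), round(base_h * scale, 1), round(scale, 2)

for w, h, s in upscale_stages(340, 512, 3):
    print(f"{w}x{h} (scale:{s})")
# -> 560.0x853.3 (scale:1.67), 784.0x1194.7 (scale:2.33), 1008.0x1536.0 (scale:3.0)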
Prediction
bryantanjw/entropy-lol:cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1
ID: k2y3j3rb6itcwgo7nea7qmtkvm
Status: Succeeded
Source: Web
Hardware: A100 (40GB)
Input
- cfg: 7
- lora: cinematic/Spider_Gwen.safetensors
- steps: 40
- width: 360
- height: 540
- batch_size: 3
- input_prompt: (masterpiece, top quality, best quality, official art, beautiful and aesthetic:1.2),highly detailed face,1girl,gwen_stacy,full body, (portrait:1.3),, spider-gwen suit , bodysuit , superhero,(extremely detailed,highres, highest detailed,8k,absurdres,CG),cyberpunk city, white sports shoes with red stripes, happy, from below, sitting, crossed legs, cloud,city, skyscraper,looking at viewer, white ballet shoes with red stripes
- sampler_name: euler_ancestral
- lora_strength: 0.9
- upscale_factor: 2.8
- negative_prompt: (worst quality:1.4), (low quality:1.4), simple background, bad anatomy
- checkpoint_model: GhostMix.safetensors
{ "cfg": 7, "lora": "cinematic/Spider_Gwen.safetensors", "steps": 40, "width": 360, "height": 540, "batch_size": 3, "input_prompt": "(masterpiece, top quality, best quality, official art, beautiful and aesthetic:1.2),highly detailed face,1girl,gwen_stacy,full body, (portrait:1.3),, spider-gwen suit , bodysuit , superhero,(extremely detailed,highres, highest detailed,8k,absurdres,CG),cyberpunk city, white sports shoes with red stripes, happy, from below, sitting, crossed legs, cloud,city, skyscraper,looking at viewer, white ballet shoes with red stripes", "sampler_name": "euler_ancestral", "lora_strength": 0.9, "upscale_factor": 2.8, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy", "checkpoint_model": "GhostMix.safetensors" }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1",
  {
    input: {
      cfg: 7,
      lora: "cinematic/Spider_Gwen.safetensors",
      steps: 40,
      width: 360,
      height: 540,
      batch_size: 3,
      input_prompt: "(masterpiece, top quality, best quality, official art, beautiful and aesthetic:1.2),highly detailed face,1girl,gwen_stacy,full body, (portrait:1.3),, spider-gwen suit , bodysuit , superhero,(extremely detailed,highres, highest detailed,8k,absurdres,CG),cyberpunk city, white sports shoes with red stripes, happy, from below, sitting, crossed legs, cloud,city, skyscraper,looking at viewer, white ballet shoes with red stripes",
      sampler_name: "euler_ancestral",
      lora_strength: 0.9,
      upscale_factor: 2.8,
      negative_prompt: "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
      checkpoint_model: "GhostMix.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk (the promises API accepts the stream-like output):
await fs.promises.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1",
    input={
        "cfg": 7,
        "lora": "cinematic/Spider_Gwen.safetensors",
        "steps": 40,
        "width": 360,
        "height": 540,
        "batch_size": 3,
        "input_prompt": "(masterpiece, top quality, best quality, official art, beautiful and aesthetic:1.2),highly detailed face,1girl,gwen_stacy,full body, (portrait:1.3),, spider-gwen suit , bodysuit , superhero,(extremely detailed,highres, highest detailed,8k,absurdres,CG),cyberpunk city, white sports shoes with red stripes, happy, from below, sitting, crossed legs, cloud,city, skyscraper,looking at viewer, white ballet shoes with red stripes",
        "sampler_name": "euler_ancestral",
        "lora_strength": 0.9,
        "upscale_factor": 2.8,
        "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
        "checkpoint_model": "GhostMix.safetensors"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
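replicate.run blocks until the run finishes, which here is around 45 seconds (see metrics below). The Python client also exposes a lower-level predictions interface for fire-and-forget use; a sketch, assuming the replicate.predictions.create / prediction.wait() API of recent client versions (input abridged):

import replicate

prediction = replicate.predictions.create(
    version="cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1",
    input={
        "cfg": 7,
        "lora": "cinematic/Spider_Gwen.safetensors",
        "steps": 40,
        "width": 360,
        "height": 540,
        "batch_size": 3,
        "sampler_name": "euler_ancestral",
        "lora_strength": 0.9,
        "upscale_factor": 2.8,
        "checkpoint_model": "GhostMix.safetensors",
        # input_prompt / negative_prompt as in the full example above
    },
)
print(prediction.id, prediction.status)  # e.g. "k2y3..." "starting"

prediction.wait()         # poll until the prediction settles
print(prediction.status)  # "succeeded"
print(prediction.output)  # list of three image URLs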
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1",
    "input": {
      "cfg": 7,
      "lora": "cinematic/Spider_Gwen.safetensors",
      "steps": 40,
      "width": 360,
      "height": 540,
      "batch_size": 3,
      "input_prompt": "(masterpiece, top quality, best quality, official art, beautiful and aesthetic:1.2),highly detailed face,1girl,gwen_stacy,full body, (portrait:1.3),, spider-gwen suit , bodysuit , superhero,(extremely detailed,highres, highest detailed,8k,absurdres,CG),cyberpunk city, white sports shoes with red stripes, happy, from below, sitting, crossed legs, cloud,city, skyscraper,looking at viewer, white ballet shoes with red stripes",
      "sampler_name": "euler_ancestral",
      "lora_strength": 0.9,
      "upscale_factor": 2.8,
      "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
      "checkpoint_model": "GhostMix.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-01-29T02:55:58.440639Z", "created_at": "2024-01-29T02:55:13.575806Z", "data_removed": false, "error": null, "id": "k2y3j3rb6itcwgo7nea7qmtkvm", "input": { "cfg": 7, "lora": "cinematic/Spider_Gwen.safetensors", "steps": 40, "width": 360, "height": 540, "batch_size": 3, "input_prompt": "(masterpiece, top quality, best quality, official art, beautiful and aesthetic:1.2),highly detailed face,1girl,gwen_stacy,full body, (portrait:1.3),, spider-gwen suit , bodysuit , superhero,(extremely detailed,highres, highest detailed,8k,absurdres,CG),cyberpunk city, white sports shoes with red stripes, happy, from below, sitting, crossed legs, cloud,city, skyscraper,looking at viewer, white ballet shoes with red stripes", "sampler_name": "euler_ancestral", "lora_strength": 0.9, "upscale_factor": 2.8, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy", "checkpoint_model": "GhostMix.safetensors" }, "logs": "Using seed: 14927047\ngot prompt\nRequested to load SD1ClipModel\nLoading 1 new model\nRequested to load BaseModel\nLoading 1 new model\n 0%| | 0/40 [00:00<?, ?it/s]\n 5%|▌ | 2/40 [00:00<00:03, 11.80it/s]\n 10%|█ | 4/40 [00:00<00:02, 12.72it/s]\n 15%|█▌ | 6/40 [00:00<00:02, 13.18it/s]\n 20%|██ | 8/40 [00:00<00:02, 13.22it/s]\n 25%|██▌ | 10/40 [00:00<00:02, 13.44it/s]\n 30%|███ | 12/40 [00:00<00:02, 13.57it/s]\n 35%|███▌ | 14/40 [00:01<00:01, 13.66it/s]\n 40%|████ | 16/40 [00:01<00:01, 13.71it/s]\n 45%|████▌ | 18/40 [00:01<00:01, 13.71it/s]\n 50%|█████ | 20/40 [00:01<00:01, 13.70it/s]\n 55%|█████▌ | 22/40 [00:01<00:01, 13.54it/s]\n 60%|██████ | 24/40 [00:01<00:01, 13.69it/s]\n 65%|██████▌ | 26/40 [00:01<00:01, 13.78it/s]\n 70%|███████ | 28/40 [00:02<00:00, 13.82it/s]\n 75%|███████▌ | 30/40 [00:02<00:00, 13.72it/s]\n 80%|████████ | 32/40 [00:02<00:00, 13.69it/s]\n 85%|████████▌ | 34/40 [00:02<00:00, 13.73it/s]\n 90%|█████████ | 36/40 [00:02<00:00, 13.64it/s]\n 95%|█████████▌| 38/40 [00:02<00:00, 13.63it/s]\n100%|██████████| 40/40 [00:02<00:00, 13.57it/s]\n100%|██████████| 40/40 [00:02<00:00, 13.56it/s]\nIterativeLatentUpscale[1/3]: 576.0x857.6 (scale:1.60)\n 0%| | 0/20 [00:00<?, ?it/s]\n 10%|█ | 2/20 [00:00<00:01, 9.87it/s]\n 15%|█▌ | 3/20 [00:00<00:01, 8.77it/s]\n 20%|██ | 4/20 [00:00<00:01, 8.41it/s]\n 25%|██▌ | 5/20 [00:00<00:01, 8.13it/s]\n 30%|███ | 6/20 [00:00<00:01, 7.98it/s]\n 35%|███▌ | 7/20 [00:00<00:01, 7.88it/s]\n 40%|████ | 8/20 [00:00<00:01, 7.82it/s]\n 45%|████▌ | 9/20 [00:01<00:01, 7.65it/s]\n 50%|█████ | 10/20 [00:01<00:01, 7.81it/s]\n 55%|█████▌ | 11/20 [00:01<00:01, 7.77it/s]\n 60%|██████ | 12/20 [00:01<00:01, 7.72it/s]\n 65%|██████▌ | 13/20 [00:01<00:00, 7.73it/s]\n 70%|███████ | 14/20 [00:01<00:00, 7.63it/s]\n 75%|███████▌ | 15/20 [00:01<00:00, 7.73it/s]\n 80%|████████ | 16/20 [00:02<00:00, 7.72it/s]\n 85%|████████▌ | 17/20 [00:02<00:00, 7.58it/s]\n 90%|█████████ | 18/20 [00:02<00:00, 7.73it/s]\n 95%|█████████▌| 19/20 [00:02<00:00, 7.70it/s]\n100%|██████████| 20/20 [00:02<00:00, 7.72it/s]\n100%|██████████| 20/20 [00:02<00:00, 7.86it/s]\nIterativeLatentUpscale[2/3]: 792.0x1179.2 (scale:2.20)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:01, 9.83it/s]\n 10%|█ | 2/20 [00:00<00:04, 4.11it/s]\n 15%|█▌ | 3/20 [00:00<00:04, 3.46it/s]\n 20%|██ | 4/20 [00:01<00:04, 3.22it/s]\n 25%|██▌ | 5/20 [00:01<00:04, 3.10it/s]\n 30%|███ | 6/20 [00:01<00:04, 3.03it/s]\n 35%|███▌ | 7/20 [00:02<00:04, 2.99it/s]\n 40%|████ | 8/20 [00:02<00:04, 2.96it/s]\n 45%|████▌ | 9/20 [00:02<00:03, 2.94it/s]\n 50%|█████ | 10/20 [00:03<00:03, 2.93it/s]\n 
55%|█████▌ | 11/20 [00:03<00:03, 2.93it/s]\n 60%|██████ | 12/20 [00:03<00:02, 2.92it/s]\n 65%|██████▌ | 13/20 [00:04<00:02, 2.92it/s]\n 70%|███████ | 14/20 [00:04<00:02, 2.92it/s]\n 75%|███████▌ | 15/20 [00:04<00:01, 2.92it/s]\n 80%|████████ | 16/20 [00:05<00:01, 2.91it/s]\n 85%|████████▌ | 17/20 [00:05<00:01, 2.91it/s]\n 90%|█████████ | 18/20 [00:05<00:00, 2.91it/s]\n 95%|█████████▌| 19/20 [00:06<00:00, 2.91it/s]\n100%|██████████| 20/20 [00:06<00:00, 2.91it/s]\n100%|██████████| 20/20 [00:06<00:00, 3.02it/s]\nIterativeLatentUpscale[Final]: 1008.0x1500.8 (scale:2.80)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:04, 4.63it/s]\n 10%|█ | 2/20 [00:00<00:09, 1.99it/s]\n 15%|█▌ | 3/20 [00:01<00:10, 1.69it/s]\n 20%|██ | 4/20 [00:02<00:10, 1.57it/s]\n 25%|██▌ | 5/20 [00:03<00:09, 1.52it/s]\n 30%|███ | 6/20 [00:03<00:09, 1.48it/s]\n 35%|███▌ | 7/20 [00:04<00:08, 1.46it/s]\n 40%|████ | 8/20 [00:05<00:08, 1.45it/s]\n 45%|████▌ | 9/20 [00:05<00:07, 1.44it/s]\n 50%|█████ | 10/20 [00:06<00:06, 1.44it/s]\n 55%|█████▌ | 11/20 [00:07<00:06, 1.43it/s]\n 60%|██████ | 12/20 [00:07<00:05, 1.43it/s]\n 65%|██████▌ | 13/20 [00:08<00:04, 1.43it/s]\n 70%|███████ | 14/20 [00:09<00:04, 1.43it/s]\n 75%|███████▌ | 15/20 [00:10<00:03, 1.43it/s]\n 80%|████████ | 16/20 [00:10<00:02, 1.43it/s]\n 85%|████████▌ | 17/20 [00:11<00:02, 1.43it/s]\n 90%|█████████ | 18/20 [00:12<00:01, 1.42it/s]\n 95%|█████████▌| 19/20 [00:12<00:00, 1.42it/s]\n100%|██████████| 20/20 [00:13<00:00, 1.42it/s]\n100%|██████████| 20/20 [00:13<00:00, 1.48it/s]\nPrompt executed in 40.01 seconds\nnode output: {'images': [{'filename': 'ComfyUI_temp_aydoz_00022_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_aydoz_00023_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_aydoz_00024_.png', 'subfolder': '', 'type': 'temp'}]}\ntemp\ntemp\ntemp\nnode output: {'images': [{'filename': 'ComfyUI_00066_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00067_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00068_.png', 'subfolder': '', 'type': 'output'}]}\noutput\noutput\noutput\n3 images generated successfully", "metrics": { "predict_time": 44.852051, "total_time": 44.864833 }, "output": [ "https://replicate.delivery/pbxt/A1iIFU3PsAouMNCVhWMygeJVUQGtfTfBdNJzYmkPPHZ5tYikA/out-0.png", "https://replicate.delivery/pbxt/5KSeca6qK3x7aaptFvjBeEkmpQhSQYyyEnxlenKYNKg7tYikA/out-1.png", "https://replicate.delivery/pbxt/W6W1loT6fY1YDSGek3HpdfZBiopdyWisCew5K1rPKDs1bxEJB/out-2.png" ], "started_at": "2024-01-29T02:55:13.588588Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/k2y3j3rb6itcwgo7nea7qmtkvm", "cancel": "https://api.replicate.com/v1/predictions/k2y3j3rb6itcwgo7nea7qmtkvm/cancel" }, "version": "cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1" }
Prediction
bryantanjw/entropy-lol:cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1
ID: bxor37rbbhqge3sunibube4ioe
Status: Succeeded
Source: Web
Hardware: A100 (40GB)
Input
- cfg: 5
- lora: cinematic/Spider_Gwen.safetensors
- steps: 20
- width: 360
- height: 540
- batch_size: 3
- input_prompt: 1girl, standing, contrapposto, waist up, dynamic pose, gwen stacy, short hair, platinum blonde hair from behind, head turned, seductive expression, annoyed expression, seductively annoyed, seductive casual 1girl, aerial fireworks, american flag, astronaut, aurora, balcony, building, christmas lights, christmas tree, city, city lights, cityscape, constellation, crescent moon, desert, photorealistic, octane render, best quality, looking at viewer, looking down, sharp focus, (8k), (4k), (Masterpiece), (Best Quality), (realistic skin texture), extremely detailed, intricate, hyper detailed, , illustration, soft lighting, , high resolution, sharp detail,, Masterpiece, best quality, sharp focus, perfect lighting, beautiful eyes, anatomically correct, anatomically correct face, extremely detailed eyes, RAW image,
- sampler_name: dpmpp_2m
- lora_strength: 0.6
- upscale_factor: 2.8
- negative_prompt: (worst quality:1.4), (low quality:1.4), simple background, bad anatomy
- checkpoint_model: Aniverse.safetensors
{ "cfg": 5, "lora": "cinematic/Spider_Gwen.safetensors", "steps": 20, "width": 360, "height": 540, "batch_size": 3, "input_prompt": "1girl, standing, contrapposto, waist up, dynamic pose, gwen stacy, short hair, platinum blonde hair from behind, head turned, seductive expression, annoyed expression, seductively annoyed, seductive casual 1girl, aerial fireworks, american flag, astronaut, aurora, balcony, building, christmas lights, christmas tree, city, city lights, cityscape, constellation, crescent moon, desert, photorealistic, octane render, best quality, looking at viewer, looking down, sharp focus, (8k), (4k), (Masterpiece), (Best Quality), (realistic skin texture), extremely detailed, intricate, hyper detailed, , illustration, soft lighting, , high resolution, sharp detail,, Masterpiece, best quality, sharp focus, perfect lighting, beautiful eyes, anatomically correct, anatomically correct face, extremely detailed eyes, RAW image,", "sampler_name": "dpmpp_2m", "lora_strength": 0.6, "upscale_factor": 2.8, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy", "checkpoint_model": "Aniverse.safetensors" }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1",
  {
    input: {
      cfg: 5,
      lora: "cinematic/Spider_Gwen.safetensors",
      steps: 20,
      width: 360,
      height: 540,
      batch_size: 3,
      input_prompt: "1girl, standing, contrapposto, waist up, dynamic pose, gwen stacy, short hair, platinum blonde hair from behind, head turned, seductive expression, annoyed expression, seductively annoyed, seductive casual 1girl, aerial fireworks, american flag, astronaut, aurora, balcony, building, christmas lights, christmas tree, city, city lights, cityscape, constellation, crescent moon, desert, photorealistic, octane render, best quality, looking at viewer, looking down, sharp focus, (8k), (4k), (Masterpiece), (Best Quality), (realistic skin texture), extremely detailed, intricate, hyper detailed, , illustration, soft lighting, , high resolution, sharp detail,, Masterpiece, best quality, sharp focus, perfect lighting, beautiful eyes, anatomically correct, anatomically correct face, extremely detailed eyes, RAW image,",
      sampler_name: "dpmpp_2m",
      lora_strength: 0.6,
      upscale_factor: 2.8,
      negative_prompt: "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
      checkpoint_model: "Aniverse.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk (the promises API accepts the stream-like output):
await fs.promises.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1",
    input={
        "cfg": 5,
        "lora": "cinematic/Spider_Gwen.safetensors",
        "steps": 20,
        "width": 360,
        "height": 540,
        "batch_size": 3,
        "input_prompt": "1girl, standing, contrapposto, waist up, dynamic pose, gwen stacy, short hair, platinum blonde hair from behind, head turned, seductive expression, annoyed expression, seductively annoyed, seductive casual 1girl, aerial fireworks, american flag, astronaut, aurora, balcony, building, christmas lights, christmas tree, city, city lights, cityscape, constellation, crescent moon, desert, photorealistic, octane render, best quality, looking at viewer, looking down, sharp focus, (8k), (4k), (Masterpiece), (Best Quality), (realistic skin texture), extremely detailed, intricate, hyper detailed, , illustration, soft lighting, , high resolution, sharp detail,, Masterpiece, best quality, sharp focus, perfect lighting, beautiful eyes, anatomically correct, anatomically correct face, extremely detailed eyes, RAW image,",
        "sampler_name": "dpmpp_2m",
        "lora_strength": 0.6,
        "upscale_factor": 2.8,
        "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
        "checkpoint_model": "Aniverse.safetensors"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
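Compared with the previous Spider_Gwen run, this one lowers lora_strength to 0.6 and cfg to 5, letting the Aniverse checkpoint's own style show through more. When dialing in a LoRA it often helps to sweep the strength and compare results side by side; a hypothetical sketch (VERSION and the abridged input stand in for the full values above):

import replicate

VERSION = "bryantanjw/entropy-lol:cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1"
base_input = {
    "cfg": 5,
    "lora": "cinematic/Spider_Gwen.safetensors",
    "steps": 20,
    "width": 360,
    "height": 540,
    "batch_size": 1,
    "sampler_name": "dpmpp_2m",
    "upscale_factor": 2.8,
    "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
    "checkpoint_model": "Aniverse.safetensors",
    # input_prompt as in the full example above
}

# Sweep the LoRA strength to see where character likeness holds without
# overpowering the checkpoint's rendering style.
for strength in (0.4, 0.6, 0.8, 1.0):
    output = replicate.run(VERSION, input={**base_input, "lora_strength": strength})
    print(strength, output)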
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1",
    "input": {
      "cfg": 5,
      "lora": "cinematic/Spider_Gwen.safetensors",
      "steps": 20,
      "width": 360,
      "height": 540,
      "batch_size": 3,
      "input_prompt": "1girl, standing, contrapposto, waist up, dynamic pose, gwen stacy, short hair, platinum blonde hair from behind, head turned, seductive expression, annoyed expression, seductively annoyed, seductive casual 1girl, aerial fireworks, american flag, astronaut, aurora, balcony, building, christmas lights, christmas tree, city, city lights, cityscape, constellation, crescent moon, desert, photorealistic, octane render, best quality, looking at viewer, looking down, sharp focus, (8k), (4k), (Masterpiece), (Best Quality), (realistic skin texture), extremely detailed, intricate, hyper detailed, , illustration, soft lighting, , high resolution, sharp detail,, Masterpiece, best quality, sharp focus, perfect lighting, beautiful eyes, anatomically correct, anatomically correct face, extremely detailed eyes, RAW image,",
      "sampler_name": "dpmpp_2m",
      "lora_strength": 0.6,
      "upscale_factor": 2.8,
      "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
      "checkpoint_model": "Aniverse.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-01-29T02:47:34.004667Z", "created_at": "2024-01-29T02:46:51.021675Z", "data_removed": false, "error": null, "id": "bxor37rbbhqge3sunibube4ioe", "input": { "cfg": 5, "lora": "cinematic/Spider_Gwen.safetensors", "steps": 20, "width": 360, "height": 540, "batch_size": 3, "input_prompt": "1girl, standing, contrapposto, waist up, dynamic pose, gwen stacy, short hair, platinum blonde hair from behind, head turned, seductive expression, annoyed expression, seductively annoyed, seductive casual 1girl, aerial fireworks, american flag, astronaut, aurora, balcony, building, christmas lights, christmas tree, city, city lights, cityscape, constellation, crescent moon, desert, photorealistic, octane render, best quality, looking at viewer, looking down, sharp focus, (8k), (4k), (Masterpiece), (Best Quality), (realistic skin texture), extremely detailed, intricate, hyper detailed, , illustration, soft lighting, , high resolution, sharp detail,, Masterpiece, best quality, sharp focus, perfect lighting, beautiful eyes, anatomically correct, anatomically correct face, extremely detailed eyes, RAW image,", "sampler_name": "dpmpp_2m", "lora_strength": 0.6, "upscale_factor": 2.8, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy", "checkpoint_model": "Aniverse.safetensors" }, "logs": "Using seed: 979918\ngot prompt\n 0%| | 0/20 [00:00<?, ?it/s]\n 10%|█ | 2/20 [00:00<00:01, 12.80it/s]\n 20%|██ | 4/20 [00:00<00:01, 13.47it/s]\n 30%|███ | 6/20 [00:00<00:01, 13.38it/s]\n 40%|████ | 8/20 [00:00<00:00, 13.72it/s]\n 50%|█████ | 10/20 [00:00<00:00, 14.00it/s]\n 60%|██████ | 12/20 [00:00<00:00, 14.05it/s]\n 70%|███████ | 14/20 [00:01<00:00, 13.95it/s]\n 80%|████████ | 16/20 [00:01<00:00, 13.04it/s]\n 90%|█████████ | 18/20 [00:01<00:00, 13.30it/s]\n100%|██████████| 20/20 [00:01<00:00, 13.21it/s]\n100%|██████████| 20/20 [00:01<00:00, 13.45it/s]\nIterativeLatentUpscale[1/3]: 576.0x857.6 (scale:1.60) \n 0%| | 0/20 [00:00<?, ?it/s]\n 10%|█ | 2/20 [00:00<00:01, 9.89it/s]\n 15%|█▌ | 3/20 [00:00<00:01, 8.79it/s]\n 20%|██ | 4/20 [00:00<00:01, 8.26it/s]\n 25%|██▌ | 5/20 [00:00<00:01, 8.10it/s]\n 30%|███ | 6/20 [00:00<00:01, 7.95it/s]\n 35%|███▌ | 7/20 [00:00<00:01, 7.77it/s]\n 40%|████ | 8/20 [00:00<00:01, 7.75it/s]\n 45%|████▌ | 9/20 [00:01<00:01, 7.72it/s]\n 50%|█████ | 10/20 [00:01<00:01, 7.67it/s]\n 55%|█████▌ | 11/20 [00:01<00:01, 7.68it/s]\n 60%|██████ | 12/20 [00:01<00:01, 7.63it/s]\n 65%|██████▌ | 13/20 [00:01<00:00, 7.64it/s]\n 70%|███████ | 14/20 [00:01<00:00, 7.59it/s]\n 75%|███████▌ | 15/20 [00:01<00:00, 7.57it/s]\n 80%|████████ | 16/20 [00:02<00:00, 7.61it/s]\n 85%|████████▌ | 17/20 [00:02<00:00, 7.60it/s]\n 90%|█████████ | 18/20 [00:02<00:00, 7.62it/s]\n 95%|█████████▌| 19/20 [00:02<00:00, 7.49it/s]\n100%|██████████| 20/20 [00:02<00:00, 7.70it/s]\n100%|██████████| 20/20 [00:02<00:00, 7.80it/s]\nIterativeLatentUpscale[2/3]: 792.0x1179.2 (scale:2.20) \n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:01, 9.75it/s]\n 10%|█ | 2/20 [00:00<00:04, 4.07it/s]\n 15%|█▌ | 3/20 [00:00<00:04, 3.43it/s]\n 20%|██ | 4/20 [00:01<00:05, 3.20it/s]\n 25%|██▌ | 5/20 [00:01<00:04, 3.08it/s]\n 30%|███ | 6/20 [00:01<00:04, 3.01it/s]\n 35%|███▌ | 7/20 [00:02<00:04, 2.97it/s]\n 40%|████ | 8/20 [00:02<00:04, 2.94it/s]\n 45%|████▌ | 9/20 [00:02<00:03, 2.92it/s]\n 50%|█████ | 10/20 [00:03<00:03, 2.91it/s]\n 55%|█████▌ | 11/20 [00:03<00:03, 2.89it/s]\n 60%|██████ | 12/20 [00:03<00:02, 2.91it/s]\n 65%|██████▌ | 13/20 [00:04<00:02, 2.90it/s]\n 70%|███████ | 14/20 [00:04<00:02, 
2.90it/s]\n 75%|███████▌ | 15/20 [00:04<00:01, 2.90it/s]\n 80%|████████ | 16/20 [00:05<00:01, 2.90it/s]\n 85%|████████▌ | 17/20 [00:05<00:01, 2.89it/s]\n 90%|█████████ | 18/20 [00:05<00:00, 2.89it/s]\n 95%|█████████▌| 19/20 [00:06<00:00, 2.89it/s]\n100%|██████████| 20/20 [00:06<00:00, 2.89it/s]\n100%|██████████| 20/20 [00:06<00:00, 3.00it/s]\nIterativeLatentUpscale[Final]: 1008.0x1500.8 (scale:2.80)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:04, 4.62it/s]\n 10%|█ | 2/20 [00:00<00:09, 1.99it/s]\n 15%|█▌ | 3/20 [00:01<00:10, 1.68it/s]\n 20%|██ | 4/20 [00:02<00:10, 1.57it/s]\n 25%|██▌ | 5/20 [00:03<00:09, 1.51it/s]\n 30%|███ | 6/20 [00:03<00:09, 1.48it/s]\n 35%|███▌ | 7/20 [00:04<00:08, 1.46it/s]\n 40%|████ | 8/20 [00:05<00:08, 1.44it/s]\n 45%|████▌ | 9/20 [00:05<00:07, 1.44it/s]\n 50%|█████ | 10/20 [00:06<00:06, 1.43it/s]\n 55%|█████▌ | 11/20 [00:07<00:06, 1.43it/s]\n 60%|██████ | 12/20 [00:07<00:05, 1.43it/s]\n 65%|██████▌ | 13/20 [00:08<00:04, 1.42it/s]\n 70%|███████ | 14/20 [00:09<00:04, 1.42it/s]\n 75%|███████▌ | 15/20 [00:10<00:03, 1.42it/s]\n 80%|████████ | 16/20 [00:10<00:02, 1.42it/s]\n 85%|████████▌ | 17/20 [00:11<00:02, 1.42it/s]\n 90%|█████████ | 18/20 [00:12<00:01, 1.42it/s]\n 95%|█████████▌| 19/20 [00:12<00:00, 1.42it/s]\n100%|██████████| 20/20 [00:13<00:00, 1.42it/s]\n100%|██████████| 20/20 [00:13<00:00, 1.47it/s]\nPrompt executed in 38.15 seconds\nnode output: {'images': [{'filename': 'ComfyUI_temp_aydoz_00004_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_aydoz_00005_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_aydoz_00006_.png', 'subfolder': '', 'type': 'temp'}]}\ntemp\ntemp\ntemp\nnode output: {'images': [{'filename': 'ComfyUI_00048_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00049_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00050_.png', 'subfolder': '', 'type': 'output'}]}\noutput\noutput\noutput\n3 images generated successfully", "metrics": { "predict_time": 42.967073, "total_time": 42.982992 }, "output": [ "https://replicate.delivery/pbxt/e4QsWC7i3a04D6kzsEw54R6dwWH8TOeXjQcLmaRrJmfGewEJB/out-0.png", "https://replicate.delivery/pbxt/jeaeUDHHHyuf3oy6pGijLp126yrzoO4xLb7DLXsfCGQQ8wEJB/out-1.png", "https://replicate.delivery/pbxt/GvV5luneIjR1USMyUidcytz5RSeu6LgMxNJ2OorzK4QFPMRSA/out-2.png" ], "started_at": "2024-01-29T02:46:51.037594Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/bxor37rbbhqge3sunibube4ioe", "cancel": "https://api.replicate.com/v1/predictions/bxor37rbbhqge3sunibube4ioe/cancel" }, "version": "cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1" }
Prediction
bryantanjw/entropy-lol:e7ca1ebdcf9d2af7ea66291dbf9902b1eefbb7de51da8128c72fe36915d799b8
ID: eyuwiftbzmwvthoykk2ygo6n3a
Status: Succeeded
Source: Web
Hardware: A40 (Large)
Input
- cfg: 6
- lora: gaming/Ahri.safetensors
- seed: 0
- steps: 25
- width: 512
- height: 720
- batch_size: 4
- custom_lora: ""
- input_prompt: masterpiece, (detailed, highres, best quality), 1girl, IncrsAhri, braid, fox tail, multiple tails, korean clothes, skirt, blurry, blurry background, arms behind back, seductive smile,
- sampler_name: dpmpp_2m
- lora_strength: 1
- negative_prompt: (worst quality:1.4), (low quality:1.4)
- checkpoint_model: Aniverse.safetensors
{ "cfg": 6, "lora": "gaming/Ahri.safetensors", "seed": 0, "steps": 25, "width": 512, "height": 720, "batch_size": 4, "custom_lora": "", "input_prompt": "masterpiece, (detailed, highres, best quality), 1girl, IncrsAhri, braid, fox tail, multiple tails, korean clothes, skirt, blurry, blurry background, arms behind back, seductive smile,", "sampler_name": "dpmpp_2m", "lora_strength": 1, "negative_prompt": "(worst quality:1.4), (low quality:1.4)", "checkpoint_model": "Aniverse.safetensors" }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:e7ca1ebdcf9d2af7ea66291dbf9902b1eefbb7de51da8128c72fe36915d799b8",
  {
    input: {
      cfg: 6,
      lora: "gaming/Ahri.safetensors",
      seed: 0,
      steps: 25,
      width: 512,
      height: 720,
      batch_size: 4,
      custom_lora: "",
      input_prompt: "masterpiece, (detailed, highres, best quality), 1girl, IncrsAhri, braid, fox tail, multiple tails, korean clothes, skirt, blurry, blurry background, arms behind back, seductive smile,",
      sampler_name: "dpmpp_2m",
      lora_strength: 1,
      negative_prompt: "(worst quality:1.4), (low quality:1.4)",
      checkpoint_model: "Aniverse.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk (the promises API accepts the stream-like output):
await fs.promises.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:e7ca1ebdcf9d2af7ea66291dbf9902b1eefbb7de51da8128c72fe36915d799b8",
    input={
        "cfg": 6,
        "lora": "gaming/Ahri.safetensors",
        "seed": 0,
        "steps": 25,
        "width": 512,
        "height": 720,
        "batch_size": 4,
        "custom_lora": "",
        "input_prompt": "masterpiece, (detailed, highres, best quality), 1girl, IncrsAhri, braid, fox tail, multiple tails, korean clothes, skirt, blurry, blurry background, arms behind back, seductive smile,",
        "sampler_name": "dpmpp_2m",
        "lora_strength": 1,
        "negative_prompt": "(worst quality:1.4), (low quality:1.4)",
        "checkpoint_model": "Aniverse.safetensors"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:e7ca1ebdcf9d2af7ea66291dbf9902b1eefbb7de51da8128c72fe36915d799b8",
    "input": {
      "cfg": 6,
      "lora": "gaming/Ahri.safetensors",
      "seed": 0,
      "steps": 25,
      "width": 512,
      "height": 720,
      "batch_size": 4,
      "custom_lora": "",
      "input_prompt": "masterpiece, (detailed, highres, best quality), 1girl, IncrsAhri, braid, fox tail, multiple tails, korean clothes, skirt, blurry, blurry background, arms behind back, seductive smile,",
      "sampler_name": "dpmpp_2m",
      "lora_strength": 1,
      "negative_prompt": "(worst quality:1.4), (low quality:1.4)",
      "checkpoint_model": "Aniverse.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-01-28T08:33:16.101056Z", "created_at": "2024-01-28T08:19:27.031954Z", "data_removed": false, "error": null, "id": "eyuwiftbzmwvthoykk2ygo6n3a", "input": { "cfg": 6, "lora": "gaming/Ahri.safetensors", "seed": 0, "steps": 25, "width": 512, "height": 720, "batch_size": 4, "custom_lora": "", "input_prompt": "masterpiece, (detailed, highres, best quality), 1girl, IncrsAhri, braid, fox tail, multiple tails, korean clothes, skirt, blurry, blurry background, arms behind back, seductive smile,", "sampler_name": "dpmpp_2m", "lora_strength": 1, "negative_prompt": "(worst quality:1.4), (low quality:1.4)", "checkpoint_model": "Aniverse.safetensors" }, "logs": "Using seed: 4693380\ngot prompt\nmodel_type EPS\nadm 0\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nmissing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}\nleft over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nLeftover VAE keys ['model_ema.decay', 'model_ema.num_updates']\nRequested to load SD1ClipModel\nLoading 1 new model\nRequested to load BaseModel\nLoading 1 new model\n 0%| | 0/25 [00:00<?, ?it/s]\n 4%|▍ | 1/25 [00:00<00:14, 1.64it/s]\n 8%|▊ | 2/25 [00:00<00:08, 2.76it/s]\n 12%|█▏ | 3/25 [00:00<00:06, 3.55it/s]\n 16%|█▌ | 4/25 [00:01<00:05, 4.11it/s]\n 20%|██ | 5/25 [00:01<00:04, 4.50it/s]\n 24%|██▍ | 6/25 [00:01<00:04, 4.70it/s]\n 28%|██▊ | 7/25 [00:01<00:03, 4.90it/s]\n 32%|███▏ | 8/25 [00:01<00:03, 5.04it/s]\n 36%|███▌ | 9/25 [00:02<00:03, 5.14it/s]\n 40%|████ | 10/25 [00:02<00:02, 5.22it/s]\n 44%|████▍ | 11/25 [00:02<00:02, 5.26it/s]\n 48%|████▊ | 12/25 [00:02<00:02, 5.28it/s]\n 52%|█████▏ | 13/25 [00:02<00:02, 5.32it/s]\n 56%|█████▌ | 14/25 [00:03<00:02, 5.34it/s]\n 60%|██████ | 15/25 [00:03<00:01, 5.34it/s]\n 64%|██████▍ | 16/25 [00:03<00:01, 5.35it/s]\n 68%|██████▊ | 17/25 [00:03<00:01, 5.35it/s]\n 72%|███████▏ | 18/25 [00:03<00:01, 5.35it/s]\n 76%|███████▌ | 19/25 [00:03<00:01, 5.36it/s]\n 80%|████████ | 20/25 [00:04<00:00, 5.36it/s]\n 84%|████████▍ | 21/25 [00:04<00:00, 5.32it/s]\n 88%|████████▊ | 22/25 [00:04<00:00, 5.34it/s]\n 92%|█████████▏| 23/25 [00:05<00:00, 2.71it/s]\n 96%|█████████▌| 24/25 [00:06<00:00, 2.06it/s]\n100%|██████████| 25/25 [00:06<00:00, 2.47it/s]\n100%|██████████| 25/25 [00:06<00:00, 3.97it/s]\nRequested to load AutoencoderKL\nLoading 1 new model\nIterativeLatentUpscale[1/3]: 853.3x1200.0 (scale:1.67)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:14, 1.33it/s]\n 10%|█ | 2/20 [00:00<00:07, 2.38it/s]\n 15%|█▌ | 3/20 [00:01<00:09, 1.71it/s]\n 20%|██ | 4/20 [00:02<00:11, 1.35it/s]\n 25%|██▌ | 5/20 [00:03<00:12, 1.22it/s]\n 30%|███ | 6/20 [00:04<00:11, 1.21it/s]\n 35%|███▌ | 7/20 [00:05<00:11, 1.15it/s]\n 40%|████ | 8/20 [00:06<00:10, 1.11it/s]\n 45%|████▌ | 9/20 [00:07<00:10, 1.08it/s]\n 50%|█████ | 10/20 [00:08<00:08, 1.16it/s]\n 55%|█████▌ | 11/20 [00:08<00:07, 1.21it/s]\n 60%|██████ | 12/20 [00:09<00:06, 1.26it/s]\n 65%|██████▌ | 13/20 [00:10<00:06, 1.07it/s]\n 70%|███████ | 14/20 [00:11<00:05, 1.05it/s]\n 75%|███████▌ | 15/20 [00:12<00:04, 1.04it/s]\n 80%|████████ | 16/20 [00:13<00:03, 1.03it/s]\n 85%|████████▌ | 17/20 [00:14<00:02, 1.07it/s]\n 90%|█████████ | 18/20 [00:15<00:01, 1.13it/s]\n 95%|█████████▌| 19/20 [00:16<00:00, 1.09it/s]\n100%|██████████| 20/20 [00:17<00:00, 
1.06it/s]\n100%|██████████| 20/20 [00:17<00:00, 1.15it/s]\nIterativeLatentUpscale[2/3]: 1194.7x1680.0 (scale:2.33)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:11, 1.67it/s]\n 10%|█ | 2/20 [00:02<00:28, 1.59s/it]\n 15%|█▌ | 3/20 [00:05<00:32, 1.91s/it]\n 20%|██ | 4/20 [00:07<00:32, 2.06s/it]\n 25%|██▌ | 5/20 [00:09<00:32, 2.14s/it]\n 30%|███ | 6/20 [00:12<00:30, 2.19s/it]\n 35%|███▌ | 7/20 [00:14<00:28, 2.22s/it]\n 40%|████ | 8/20 [00:16<00:26, 2.24s/it]\n 45%|████▌ | 9/20 [00:18<00:24, 2.26s/it]\n 50%|█████ | 10/20 [00:21<00:22, 2.27s/it]\n 55%|█████▌ | 11/20 [00:23<00:20, 2.27s/it]\n 60%|██████ | 12/20 [00:25<00:18, 2.28s/it]\n 65%|██████▌ | 13/20 [00:28<00:15, 2.28s/it]\n 70%|███████ | 14/20 [00:30<00:13, 2.28s/it]\n 75%|███████▌ | 15/20 [00:32<00:11, 2.29s/it]\n 80%|████████ | 16/20 [00:34<00:09, 2.29s/it]\n 85%|████████▌ | 17/20 [00:37<00:06, 2.29s/it]\n 90%|█████████ | 18/20 [00:39<00:04, 2.29s/it]\n 95%|█████████▌| 19/20 [00:41<00:02, 2.33s/it]\n100%|██████████| 20/20 [00:44<00:00, 2.32s/it]\n100%|██████████| 20/20 [00:44<00:00, 2.21s/it]\nIterativeLatentUpscale[Final]: 1536.0x2160.0 (scale:3.00)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:01<00:28, 1.51s/it]\n 10%|█ | 2/20 [00:06<01:05, 3.64s/it]\n 15%|█▌ | 3/20 [00:11<01:13, 4.32s/it]\n 20%|██ | 4/20 [00:16<01:14, 4.65s/it]\n 25%|██▌ | 5/20 [00:22<01:12, 4.82s/it]\n 30%|███ | 6/20 [00:27<01:09, 4.93s/it]\n 35%|███▌ | 7/20 [00:32<01:04, 5.00s/it]\n 40%|████ | 8/20 [00:37<01:00, 5.05s/it]\n 45%|████▌ | 9/20 [00:42<00:55, 5.08s/it]\n 50%|█████ | 10/20 [00:47<00:50, 5.10s/it]\n 55%|█████▌ | 11/20 [00:52<00:45, 5.11s/it]\n 60%|██████ | 12/20 [00:58<00:40, 5.12s/it]\n 65%|██████▌ | 13/20 [01:03<00:35, 5.13s/it]\n 70%|███████ | 14/20 [01:08<00:30, 5.13s/it]\n 75%|███████▌ | 15/20 [01:13<00:25, 5.13s/it]\n 80%|████████ | 16/20 [01:18<00:20, 5.14s/it]\n 85%|████████▌ | 17/20 [01:23<00:15, 5.14s/it]\n 90%|█████████ | 18/20 [01:28<00:10, 5.14s/it]\n 95%|█████████▌| 19/20 [01:34<00:05, 5.14s/it]\n100%|██████████| 20/20 [01:39<00:00, 5.14s/it]\n100%|██████████| 20/20 [01:39<00:00, 4.96s/it]\nPrompt executed in 234.25 seconds\nnode output: {'images': [{'filename': 'ComfyUI_temp_qbeir_00001_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_qbeir_00002_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_qbeir_00003_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_qbeir_00004_.png', 'subfolder': '', 'type': 'temp'}]}\ntemp\ntemp\ntemp\ntemp\nnode output: {'images': [{'filename': 'ComfyUI_00045_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00046_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00047_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00048_.png', 'subfolder': '', 'type': 'output'}]}\noutput\noutput\noutput\noutput\n4 images generated successfully", "metrics": { "predict_time": 243.86861, "total_time": 829.069102 }, "output": [ "https://replicate.delivery/pbxt/efWekY9BeaDOjTQOl7qv8WTmdoMGXG9E4ejvLEXpdE2IphHSC/out-0.png", "https://replicate.delivery/pbxt/DSTCfJXBalUGCSQnvag2iHSv9s5zAKYWamsR3mNDfFVKN8QSA/out-1.png", "https://replicate.delivery/pbxt/SAfHeHCOklkOPk7iTwgH5oZMSxJpoPfLSQCB9cHkqe6v0wDJB/out-2.png", "https://replicate.delivery/pbxt/CtJxUT3AcRbSHp6MzjOQjDOb2A9BCpDYtiQFIqd11L6SDPkE/out-3.png" ], "started_at": "2024-01-28T08:29:12.232446Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/eyuwiftbzmwvthoykk2ygo6n3a", "cancel": 
"https://api.replicate.com/v1/predictions/eyuwiftbzmwvthoykk2ygo6n3a/cancel" }, "version": "e7ca1ebdcf9d2af7ea66291dbf9902b1eefbb7de51da8128c72fe36915d799b8" }
Prediction
bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9
ID: nnaxucdb3iqjcqjvywos2mcfsu · Status: Succeeded · Source: Web · Hardware: A100 (40GB)
Input
- cfg: 7
- lora: anime/Lucy_Cyberpunk.safetensors
- steps: 20
- width: 360
- height: 540
- batch_size: 3
- input_prompt: masterpiece, best quality, highres, lu1, cyborg, multicolored hair, makeup, bare shoulders, black leotard, highleg leotard, (thong:1.1), white jacket, open jacket, belt, shorts, cowboy shot, planet, moon, earth (planet)
- sampler_name: dpmpp_2m
- lora_strength: 1
- upscale_factor: 3
- negative_prompt: (worst quality:1.4), (low quality:1.4), simple background, bad anatomy
- checkpoint_model: Pastel.safetensors
{ "cfg": 7, "lora": "anime/Lucy_Cyberpunk.safetensors", "steps": 20, "width": 360, "height": 540, "batch_size": 3, "input_prompt": "masterpiece, best quality, highres, lu1, cyborg, multicolored hair, makeup, bare shoulders, black leotard, highleg leotard, (thong:1.1), white jacket, open jacket, belt, shorts, cowboy shot, planet, moon, earth (planet)", "sampler_name": "dpmpp_2m", "lora_strength": 1, "upscale_factor": 3, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy", "checkpoint_model": "Pastel.safetensors" }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9",
  {
    input: {
      cfg: 7,
      lora: "anime/Lucy_Cyberpunk.safetensors",
      steps: 20,
      width: 360,
      height: 540,
      batch_size: 3,
      input_prompt: "masterpiece, best quality, highres, lu1, cyborg, multicolored hair, makeup, bare shoulders, black leotard, highleg leotard, (thong:1.1), white jacket, open jacket, belt, shorts, cowboy shot, planet, moon, earth (planet)",
      sampler_name: "dpmpp_2m",
      lora_strength: 1,
      upscale_factor: 3,
      negative_prompt: "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
      checkpoint_model: "Pastel.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk (callback-style fs.writeFile throws without a
// callback, so use the promise API):
await fs.promises.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9",
    input={
        "cfg": 7,
        "lora": "anime/Lucy_Cyberpunk.safetensors",
        "steps": 20,
        "width": 360,
        "height": 540,
        "batch_size": 3,
        "input_prompt": "masterpiece, best quality, highres, lu1, cyborg, multicolored hair, makeup, bare shoulders, black leotard, highleg leotard, (thong:1.1), white jacket, open jacket, belt, shorts, cowboy shot, planet, moon, earth (planet)",
        "sampler_name": "dpmpp_2m",
        "lora_strength": 1,
        "upscale_factor": 3,
        "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
        "checkpoint_model": "Pastel.safetensors"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9",
    "input": {
      "cfg": 7,
      "lora": "anime/Lucy_Cyberpunk.safetensors",
      "steps": 20,
      "width": 360,
      "height": 540,
      "batch_size": 3,
      "input_prompt": "masterpiece, best quality, highres, lu1, cyborg, multicolored hair, makeup, bare shoulders, black leotard, highleg leotard, (thong:1.1), white jacket, open jacket, belt, shorts, cowboy shot, planet, moon, earth (planet)",
      "sampler_name": "dpmpp_2m",
      "lora_strength": 1,
      "upscale_factor": 3,
      "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
      "checkpoint_model": "Pastel.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-01-28T23:32:29.210600Z", "created_at": "2024-01-28T23:23:23.423045Z", "data_removed": false, "error": null, "id": "nnaxucdb3iqjcqjvywos2mcfsu", "input": { "cfg": 7, "lora": "anime/Lucy_Cyberpunk.safetensors", "steps": 20, "width": 360, "height": 540, "batch_size": 3, "input_prompt": "masterpiece, best quality, highres, lu1, cyborg, multicolored hair, makeup, bare shoulders, black leotard, highleg leotard, (thong:1.1), white jacket, open jacket, belt, shorts, cowboy shot, planet, moon, earth (planet)", "sampler_name": "dpmpp_2m", "lora_strength": 1, "upscale_factor": 3, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy", "checkpoint_model": "Pastel.safetensors" }, "logs": "Using seed: 4871338\ngot prompt\nmodel_type EPS\nadm 0\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nmissing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}\nleft over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])\nRequested to load SD1ClipModel\nLoading 1 new model\nRequested to load BaseModel\nLoading 1 new model\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:02, 6.84it/s]\n 20%|██ | 4/20 [00:00<00:01, 14.79it/s]\n 35%|███▌ | 7/20 [00:00<00:00, 17.45it/s]\n 50%|█████ | 10/20 [00:00<00:00, 18.71it/s]\n 65%|██████▌ | 13/20 [00:00<00:00, 19.42it/s]\n 75%|███████▌ | 15/20 [00:00<00:00, 19.36it/s]\n 90%|█████████ | 18/20 [00:00<00:00, 19.80it/s]\n100%|██████████| 20/20 [00:01<00:00, 18.56it/s]\nIterativeLatentUpscale[1/3]: 600.0x893.3 (scale:1.67)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:02, 7.13it/s]\n 15%|█▌ | 3/20 [00:00<00:02, 7.68it/s]\n 20%|██ | 4/20 [00:00<00:02, 7.08it/s]\n 25%|██▌ | 5/20 [00:00<00:02, 6.78it/s]\n 30%|███ | 6/20 [00:00<00:02, 6.59it/s]\n 35%|███▌ | 7/20 [00:01<00:02, 6.47it/s]\n 40%|████ | 8/20 [00:01<00:01, 6.38it/s]\n 45%|████▌ | 9/20 [00:01<00:01, 6.33it/s]\n 50%|█████ | 10/20 [00:01<00:01, 6.30it/s]\n 55%|█████▌ | 11/20 [00:01<00:01, 6.28it/s]\n 60%|██████ | 12/20 [00:01<00:01, 6.26it/s]\n 65%|██████▌ | 13/20 [00:02<00:01, 6.24it/s]\n 70%|███████ | 14/20 [00:02<00:00, 6.21it/s]\n 75%|███████▌ | 15/20 [00:02<00:00, 6.20it/s]\n 80%|████████ | 16/20 [00:02<00:00, 6.20it/s]\n 85%|████████▌ | 17/20 [00:02<00:00, 6.20it/s]\n 90%|█████████ | 18/20 [00:02<00:00, 6.20it/s]\n 95%|█████████▌| 19/20 [00:02<00:00, 6.19it/s]\n100%|██████████| 20/20 [00:03<00:00, 6.19it/s]\n100%|██████████| 20/20 [00:03<00:00, 6.38it/s]\nIterativeLatentUpscale[2/3]: 840.0x1250.7 (scale:2.33)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:02, 7.10it/s]\n 10%|█ | 2/20 [00:00<00:04, 3.93it/s]\n 15%|█▌ | 3/20 [00:00<00:05, 3.32it/s]\n 20%|██ | 4/20 [00:01<00:05, 3.04it/s]\n 25%|██▌ | 5/20 [00:01<00:05, 2.95it/s]\n 30%|███ | 6/20 [00:01<00:04, 2.89it/s]\n 35%|███▌ | 7/20 [00:02<00:04, 2.86it/s]\n 40%|████ | 8/20 [00:02<00:04, 2.84it/s]\n 45%|████▌ | 9/20 [00:02<00:03, 2.82it/s]\n 50%|█████ | 10/20 [00:03<00:03, 2.81it/s]\n 55%|█████▌ | 11/20 [00:03<00:03, 2.80it/s]\n 60%|██████ | 12/20 [00:04<00:02, 2.80it/s]\n 65%|██████▌ | 13/20 [00:04<00:02, 2.80it/s]\n 70%|███████ | 14/20 [00:04<00:02, 2.80it/s]\n 75%|███████▌ | 15/20 [00:05<00:01, 2.79it/s]\n 80%|████████ | 16/20 [00:05<00:01, 2.79it/s]\n 85%|████████▌ | 17/20 [00:05<00:01, 2.79it/s]\n 90%|█████████ | 18/20 [00:06<00:00, 2.76it/s]\n 95%|█████████▌| 19/20 [00:06<00:00, 2.77it/s]\n100%|██████████| 20/20 [00:06<00:00, 
2.78it/s]\n100%|██████████| 20/20 [00:06<00:00, 2.88it/s]\nIterativeLatentUpscale[Final]: 1080.0x1608.0 (scale:3.00)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:05, 3.65it/s]\n 10%|█ | 2/20 [00:01<00:11, 1.57it/s]\n 15%|█▌ | 3/20 [00:02<00:12, 1.32it/s]\n 20%|██ | 4/20 [00:02<00:12, 1.23it/s]\n 25%|██▌ | 5/20 [00:03<00:12, 1.19it/s]\n 30%|███ | 6/20 [00:04<00:12, 1.16it/s]\n 35%|███▌ | 7/20 [00:05<00:11, 1.15it/s]\n 40%|████ | 8/20 [00:06<00:10, 1.14it/s]\n 45%|████▌ | 9/20 [00:07<00:09, 1.13it/s]\n 50%|█████ | 10/20 [00:08<00:08, 1.12it/s]\n 55%|█████▌ | 11/20 [00:09<00:08, 1.12it/s]\n 60%|██████ | 12/20 [00:10<00:07, 1.12it/s]\n 65%|██████▌ | 13/20 [00:11<00:06, 1.12it/s]\n 70%|███████ | 14/20 [00:11<00:05, 1.12it/s]\n 75%|███████▌ | 15/20 [00:12<00:04, 1.12it/s]\n 80%|████████ | 16/20 [00:13<00:03, 1.12it/s]\n 85%|████████▌ | 17/20 [00:14<00:02, 1.12it/s]\n 90%|█████████ | 18/20 [00:15<00:01, 1.12it/s]\n 95%|█████████▌| 19/20 [00:16<00:00, 1.12it/s]\n100%|██████████| 20/20 [00:17<00:00, 1.12it/s]\n100%|██████████| 20/20 [00:17<00:00, 1.16it/s]\nPrompt executed in 42.57 seconds\nnode output: {'images': [{'filename': 'ComfyUI_temp_ftyuz_00003_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_ftyuz_00004_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_ftyuz_00005_.png', 'subfolder': '', 'type': 'temp'}]}\ntemp\ntemp\ntemp\nnode output: {'images': [{'filename': 'ComfyUI_00047_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00048_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00049_.png', 'subfolder': '', 'type': 'output'}]}\noutput\noutput\noutput\n3 images generated successfully", "metrics": { "predict_time": 47.856598, "total_time": 545.787555 }, "output": [ "https://replicate.delivery/pbxt/VAZnc6KzyQJfCKRCucHF7IdksPUyZuPCBiDkbQ8ldLHFskIJA/out-0.png", "https://replicate.delivery/pbxt/SheM72olxwzuPitVH6XS81JeGONenFvLtH7lkpLoeExtglEJB/out-1.png", "https://replicate.delivery/pbxt/cykkNukhCBbTKNDCKt3bEX9wluvwZq2f3gFPyaA5aoYGskIJA/out-2.png" ], "started_at": "2024-01-28T23:31:41.354002Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/nnaxucdb3iqjcqjvywos2mcfsu", "cancel": "https://api.replicate.com/v1/predictions/nnaxucdb3iqjcqjvywos2mcfsu/cancel" }, "version": "58d9854c027600b70873c7eebc949a1b3ef8aa9eb37269592b5feb30e6bed1e9" }
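A detail worth noting in these logs: the stage resolutions are computed after snapping width and height down to the 8-pixel latent grid, which is why the 540 px height becomes 893.3 at scale 1.67 (540 → 536, then 536 × 5/3) rather than 900. A reconstruction of that arithmetic, inferred from the logged numbers rather than taken from the workflow source:

def stage_size(width: int, height: int, scale: float) -> tuple[float, float]:
    # Snap each dimension down to a multiple of 8 (the latent grid), then scale:
    # 540 -> 536, and 536 * 5/3 = 893.3, matching "600.0x893.3 (scale:1.67)" above.
    return ((width // 8) * 8 * scale, (height // 8) * 8 * scale)

print(stage_size(360, 540, 5 / 3))  # (600.0, 893.33...) -> stage 1 above
print(stage_size(360, 540, 3.0))    # (1080.0, 1608.0) -> the final stage above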
Prediction
bryantanjw/entropy-lol:cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1
ID: elyextbbrmmcc4ehtzgtlvlp7a · Status: Succeeded · Source: Web · Hardware: A100 (40GB)
Input
- cfg: 7
- lora: gaming/Soul_Fighter_Gwen.safetensors
- steps: 20
- width: 360
- height: 540
- batch_size: 3
- input_prompt: (detailed arena background), masterpiece, best quality, Soul_Fighter_Gwen, multicolored_eyes, heterochromia, ,twin_drills ringlets, long_hair, closed mouth, tears, wavy mouth, tearing up, pout, meme, parody, frown, v-shaped eyebrows, crying, sad, crying with eyes open, puffy cheeks
- sampler_name: dpmpp_2m
- lora_strength: 1
- upscale_factor: 2.8
- negative_prompt: (worst quality:1.4), (low quality:1.4), simple background, unprofessional anatomy, unprofessional fingers
- checkpoint_model: MeinaAlter.safetensors
{ "cfg": 7, "lora": "gaming/Soul_Fighter_Gwen.safetensors", "steps": 20, "width": 360, "height": 540, "batch_size": 3, "input_prompt": "(detailed arena background), masterpiece, best quality, Soul_Fighter_Gwen, multicolored_eyes, heterochromia, ,twin_drills ringlets, long_hair, closed mouth, tears, wavy mouth, tearing up, pout, meme, parody, frown, v-shaped eyebrows, crying, sad, crying with eyes open, puffy cheeks", "sampler_name": "dpmpp_2m", "lora_strength": 1, "upscale_factor": 2.8, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, unprofessional anatomy, unprofessional fingers", "checkpoint_model": "MeinaAlter.safetensors" }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1",
  {
    input: {
      cfg: 7,
      lora: "gaming/Soul_Fighter_Gwen.safetensors",
      steps: 20,
      width: 360,
      height: 540,
      batch_size: 3,
      input_prompt: "(detailed arena background), masterpiece, best quality, Soul_Fighter_Gwen, multicolored_eyes, heterochromia, ,twin_drills ringlets, long_hair, closed mouth, tears, wavy mouth, tearing up, pout, meme, parody, frown, v-shaped eyebrows, crying, sad, crying with eyes open, puffy cheeks",
      sampler_name: "dpmpp_2m",
      lora_strength: 1,
      upscale_factor: 2.8,
      negative_prompt: "(worst quality:1.4), (low quality:1.4), simple background, unprofessional anatomy, unprofessional fingers",
      checkpoint_model: "MeinaAlter.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk (callback-style fs.writeFile throws without a
// callback, so use the promise API):
await fs.promises.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1",
    input={
        "cfg": 7,
        "lora": "gaming/Soul_Fighter_Gwen.safetensors",
        "steps": 20,
        "width": 360,
        "height": 540,
        "batch_size": 3,
        "input_prompt": "(detailed arena background), masterpiece, best quality, Soul_Fighter_Gwen, multicolored_eyes, heterochromia, ,twin_drills ringlets, long_hair, closed mouth, tears, wavy mouth, tearing up, pout, meme, parody, frown, v-shaped eyebrows, crying, sad, crying with eyes open, puffy cheeks",
        "sampler_name": "dpmpp_2m",
        "lora_strength": 1,
        "upscale_factor": 2.8,
        "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, unprofessional anatomy, unprofessional fingers",
        "checkpoint_model": "MeinaAlter.safetensors"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1",
    "input": {
      "cfg": 7,
      "lora": "gaming/Soul_Fighter_Gwen.safetensors",
      "steps": 20,
      "width": 360,
      "height": 540,
      "batch_size": 3,
      "input_prompt": "(detailed arena background), masterpiece, best quality, Soul_Fighter_Gwen, multicolored_eyes, heterochromia, ,twin_drills ringlets, long_hair, closed mouth, tears, wavy mouth, tearing up, pout, meme, parody, frown, v-shaped eyebrows, crying, sad, crying with eyes open, puffy cheeks",
      "sampler_name": "dpmpp_2m",
      "lora_strength": 1,
      "upscale_factor": 2.8,
      "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, unprofessional anatomy, unprofessional fingers",
      "checkpoint_model": "MeinaAlter.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-01-29T03:12:46.083915Z", "created_at": "2024-01-29T03:12:01.671546Z", "data_removed": false, "error": null, "id": "elyextbbrmmcc4ehtzgtlvlp7a", "input": { "cfg": 7, "lora": "gaming/Soul_Fighter_Gwen.safetensors", "steps": 20, "width": 360, "height": 540, "batch_size": 3, "input_prompt": "(detailed arena background), masterpiece, best quality, Soul_Fighter_Gwen, multicolored_eyes, heterochromia, ,twin_drills ringlets, long_hair, closed mouth, tears, wavy mouth, tearing up, pout, meme, parody, frown, v-shaped eyebrows, crying, sad, crying with eyes open, puffy cheeks", "sampler_name": "dpmpp_2m", "lora_strength": 1, "upscale_factor": 2.8, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, unprofessional anatomy, unprofessional fingers", "checkpoint_model": "MeinaAlter.safetensors" }, "logs": "Using seed: 655891\ngot prompt\nmodel_type EPS\nadm 0\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nmissing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}\nleft over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])\nRequested to load SD1ClipModel\nLoading 1 new model\nRequested to load BaseModel\nLoading 1 new model\n 0%| | 0/20 [00:00<?, ?it/s]\n 10%|█ | 2/20 [00:00<00:01, 12.27it/s]\n 20%|██ | 4/20 [00:00<00:01, 13.14it/s]\n 30%|███ | 6/20 [00:00<00:01, 13.48it/s]\n 40%|████ | 8/20 [00:00<00:00, 13.50it/s]\n 50%|█████ | 10/20 [00:00<00:00, 13.74it/s]\n 60%|██████ | 12/20 [00:00<00:00, 13.96it/s]\n 70%|███████ | 14/20 [00:01<00:00, 14.12it/s]\n 80%|████████ | 16/20 [00:01<00:00, 14.03it/s]\n 90%|█████████ | 18/20 [00:01<00:00, 13.96it/s]\n100%|██████████| 20/20 [00:01<00:00, 13.91it/s]\n100%|██████████| 20/20 [00:01<00:00, 13.77it/s]\nIterativeLatentUpscale[1/3]: 576.0x857.6 (scale:1.60) \n 0%| | 0/20 [00:00<?, ?it/s]\n 10%|█ | 2/20 [00:00<00:01, 10.05it/s]\n 20%|██ | 4/20 [00:00<00:01, 8.58it/s]\n 25%|██▌ | 5/20 [00:00<00:01, 8.11it/s]\n 30%|███ | 6/20 [00:00<00:01, 8.18it/s]\n 35%|███▌ | 7/20 [00:00<00:01, 8.05it/s]\n 40%|████ | 8/20 [00:00<00:01, 7.93it/s]\n 45%|████▌ | 9/20 [00:01<00:01, 7.87it/s]\n 50%|█████ | 10/20 [00:01<00:01, 7.85it/s]\n 55%|█████▌ | 11/20 [00:01<00:01, 7.82it/s]\n 60%|██████ | 12/20 [00:01<00:01, 7.82it/s]\n 65%|██████▌ | 13/20 [00:01<00:00, 7.66it/s]\n 70%|███████ | 14/20 [00:01<00:00, 7.84it/s]\n 75%|███████▌ | 15/20 [00:01<00:00, 7.81it/s]\n 80%|████████ | 16/20 [00:02<00:00, 7.80it/s]\n 85%|████████▌ | 17/20 [00:02<00:00, 7.72it/s]\n 90%|█████████ | 18/20 [00:02<00:00, 7.77it/s]\n 95%|█████████▌| 19/20 [00:02<00:00, 7.75it/s]\n100%|██████████| 20/20 [00:02<00:00, 7.75it/s]\n100%|██████████| 20/20 [00:02<00:00, 7.93it/s]\nIterativeLatentUpscale[2/3]: 792.0x1179.2 (scale:2.20)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:01, 9.98it/s]\n 10%|█ | 2/20 [00:00<00:04, 4.11it/s]\n 15%|█▌ | 3/20 [00:00<00:04, 3.46it/s]\n 20%|██ | 4/20 [00:01<00:04, 3.22it/s]\n 25%|██▌ | 5/20 [00:01<00:04, 3.10it/s]\n 30%|███ | 6/20 [00:01<00:04, 3.04it/s]\n 35%|███▌ | 7/20 [00:02<00:04, 3.00it/s]\n 40%|████ | 8/20 [00:02<00:04, 2.97it/s]\n 
45%|████▌ | 9/20 [00:02<00:03, 2.95it/s]\n 50%|█████ | 10/20 [00:03<00:03, 2.94it/s]\n 55%|█████▌ | 11/20 [00:03<00:03, 2.93it/s]\n 60%|██████ | 12/20 [00:03<00:02, 2.93it/s]\n 65%|██████▌ | 13/20 [00:04<00:02, 2.93it/s]\n 70%|███████ | 14/20 [00:04<00:02, 2.92it/s]\n 75%|███████▌ | 15/20 [00:04<00:01, 2.92it/s]\n 80%|████████ | 16/20 [00:05<00:01, 2.92it/s]\n 85%|████████▌ | 17/20 [00:05<00:01, 2.92it/s]\n 90%|█████████ | 18/20 [00:05<00:00, 2.92it/s]\n 95%|█████████▌| 19/20 [00:06<00:00, 2.91it/s]\n100%|██████████| 20/20 [00:06<00:00, 2.92it/s]\n100%|██████████| 20/20 [00:06<00:00, 3.02it/s]\nIterativeLatentUpscale[Final]: 1008.0x1500.8 (scale:2.80) \n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:03, 4.85it/s]\n 10%|█ | 2/20 [00:00<00:08, 2.02it/s]\n 15%|█▌ | 3/20 [00:01<00:09, 1.70it/s]\n 20%|██ | 4/20 [00:02<00:10, 1.58it/s]\n 25%|██▌ | 5/20 [00:02<00:09, 1.53it/s]\n 30%|███ | 6/20 [00:03<00:09, 1.49it/s]\n 35%|███▌ | 7/20 [00:04<00:08, 1.47it/s]\n 40%|████ | 8/20 [00:05<00:08, 1.46it/s]\n 45%|████▌ | 9/20 [00:05<00:07, 1.45it/s]\n 50%|█████ | 10/20 [00:06<00:06, 1.44it/s]\n 55%|█████▌ | 11/20 [00:07<00:06, 1.44it/s]\n 60%|██████ | 12/20 [00:07<00:05, 1.44it/s]\n 65%|██████▌ | 13/20 [00:08<00:04, 1.43it/s]\n 70%|███████ | 14/20 [00:09<00:04, 1.43it/s]\n 75%|███████▌ | 15/20 [00:09<00:03, 1.43it/s]\n 80%|████████ | 16/20 [00:10<00:02, 1.43it/s]\n 85%|████████▌ | 17/20 [00:11<00:02, 1.43it/s]\n 90%|█████████ | 18/20 [00:12<00:01, 1.43it/s]\n 95%|█████████▌| 19/20 [00:12<00:00, 1.43it/s]\n100%|██████████| 20/20 [00:13<00:00, 1.43it/s]\n100%|██████████| 20/20 [00:13<00:00, 1.48it/s]\nPrompt executed in 39.52 seconds\nnode output: {'images': [{'filename': 'ComfyUI_temp_aydoz_00052_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_aydoz_00053_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_aydoz_00054_.png', 'subfolder': '', 'type': 'temp'}]}\ntemp\ntemp\ntemp\nnode output: {'images': [{'filename': 'ComfyUI_00096_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00097_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00098_.png', 'subfolder': '', 'type': 'output'}]}\noutput\noutput\noutput\n3 images generated successfully", "metrics": { "predict_time": 44.399743, "total_time": 44.412369 }, "output": [ "https://replicate.delivery/pbxt/jnhXELeeH7q8TUZL0y2LfYb4xO3J9tPNA34XiPeYn7zsayEJB/out-0.png", "https://replicate.delivery/pbxt/lMbg6FMSIpIbDVWkIVzfqwVqxwl1ufFcf5pK3kG6FB5YNZikA/out-1.png", "https://replicate.delivery/pbxt/gFX1kAUfSp0CRyKpPYbZQpKTAO1rcwTTjNhA7ceEGfmbNZikA/out-2.png" ], "started_at": "2024-01-29T03:12:01.684172Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/elyextbbrmmcc4ehtzgtlvlp7a", "cancel": "https://api.replicate.com/v1/predictions/elyextbbrmmcc4ehtzgtlvlp7a/cancel" }, "version": "cc8bd44a75878323db1fda6179d3f89001898507b62fb6d4365e164687c910f1" }
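Comparing this run with the earlier ones shows how upscale_factor sets the schedule: the three IterativeLatentUpscale stages step linearly from 1 toward the factor, so factor 3 yields scales 1.67/2.33/3.00 while this run's factor 2.8 yields 1.60/2.20/2.80. A sketch of that schedule, again inferred from the logs rather than taken from the workflow source:

def iterative_scales(upscale_factor: float, stages: int = 3) -> list[float]:
    # Evenly spaced scales ending at upscale_factor.
    return [1 + (upscale_factor - 1) * i / stages for i in range(1, stages + 1)]

print(iterative_scales(2.8))  # [1.6, 2.2, 2.8] -> this run's logged scales
print(iterative_scales(3.0))  # [1.67, 2.33, 3.00] (rounded) -> the factor-3 runs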
Prediction
bryantanjw/entropy-lol:517d6173f497984999e7a137ed43ef7bd4be6090f560fa4dd6372056abf55359
ID: ittu66zbvy3prqfqngjmtykvzi · Status: Succeeded · Source: Web · Hardware: A100 (40GB)
Input
- cfg: 7
- lora: gaming/Ahri.safetensors
- seed: 15194649
- steps: 20
- width: 360
- height: 640
- batch_size: 2
- input_prompt: photorealistic:1.4, best quality, realistic, masterpiece, high quality, UHD, shadow, taken by Canon EOS, SIGMA Art Lens 35mm F1.4, ISO 200 Shutter Speed 2000, light blue hair, pink hair, t-shirt with floral print, jeans shorts, casual fashion, (hands in pocket), blooming all around, (limited palette), colourful, bright colors, pink bag
- sampler_name: euler_ancestral
- lora_strength: 0
- upscale_factor: 3
- negative_prompt: (worst quality:1.4), (low quality:1.4), simple background, bad anatomy
- checkpoint_model: DarkSushi.safetensors
{ "cfg": 7, "lora": "gaming/Ahri.safetensors", "seed": 15194649, "steps": 20, "width": 360, "height": 640, "batch_size": 2, "input_prompt": "photorealistic:1.4, best quality, realistic, masterpiece, high quality, UHD, shadow, taken by Canon EOS, SIGMA Art Lens 35mm F1.4, ISO 200 Shutter Speed 2000, light blue hair, pink hair, t-shirt with floral print, jeans shorts, casual fashion, (hands in pocket), blooming all around, (limited palette), colourful, bright colors, pink bag", "sampler_name": "euler_ancestral", "lora_strength": 0, "upscale_factor": 3, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy", "checkpoint_model": "DarkSushi.safetensors" }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:517d6173f497984999e7a137ed43ef7bd4be6090f560fa4dd6372056abf55359",
  {
    input: {
      cfg: 7,
      lora: "gaming/Ahri.safetensors",
      seed: 15194649,
      steps: 20,
      width: 360,
      height: 640,
      batch_size: 2,
      input_prompt: "photorealistic:1.4, best quality, realistic, masterpiece, high quality, UHD, shadow, taken by Canon EOS, SIGMA Art Lens 35mm F1.4, ISO 200 Shutter Speed 2000, light blue hair, pink hair, t-shirt with floral print, jeans shorts, casual fashion, (hands in pocket), blooming all around, (limited palette), colourful, bright colors, pink bag",
      sampler_name: "euler_ancestral",
      lora_strength: 0,
      upscale_factor: 3,
      negative_prompt: "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
      checkpoint_model: "DarkSushi.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk (callback-style fs.writeFile throws without a
// callback, so use the promise API):
await fs.promises.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:517d6173f497984999e7a137ed43ef7bd4be6090f560fa4dd6372056abf55359",
    input={
        "cfg": 7,
        "lora": "gaming/Ahri.safetensors",
        "seed": 15194649,
        "steps": 20,
        "width": 360,
        "height": 640,
        "batch_size": 2,
        "input_prompt": "photorealistic:1.4, best quality, realistic, masterpiece, high quality, UHD, shadow, taken by Canon EOS, SIGMA Art Lens 35mm F1.4, ISO 200 Shutter Speed 2000, light blue hair, pink hair, t-shirt with floral print, jeans shorts, casual fashion, (hands in pocket), blooming all around, (limited palette), colourful, bright colors, pink bag",
        "sampler_name": "euler_ancestral",
        "lora_strength": 0,
        "upscale_factor": 3,
        "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
        "checkpoint_model": "DarkSushi.safetensors"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:517d6173f497984999e7a137ed43ef7bd4be6090f560fa4dd6372056abf55359",
    "input": {
      "cfg": 7,
      "lora": "gaming/Ahri.safetensors",
      "seed": 15194649,
      "steps": 20,
      "width": 360,
      "height": 640,
      "batch_size": 2,
      "input_prompt": "photorealistic:1.4, best quality, realistic, masterpiece, high quality, UHD, shadow, taken by Canon EOS, SIGMA Art Lens 35mm F1.4, ISO 200 Shutter Speed 2000, light blue hair, pink hair, t-shirt with floral print, jeans shorts, casual fashion, (hands in pocket), blooming all around, (limited palette), colourful, bright colors, pink bag",
      "sampler_name": "euler_ancestral",
      "lora_strength": 0,
      "upscale_factor": 3,
      "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy",
      "checkpoint_model": "DarkSushi.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-02-12T07:25:34.111229Z", "created_at": "2024-02-12T07:24:53.862074Z", "data_removed": false, "error": null, "id": "ittu66zbvy3prqfqngjmtykvzi", "input": { "cfg": 7, "lora": "gaming/Ahri.safetensors", "seed": 15194649, "steps": 20, "width": 360, "height": 640, "batch_size": 2, "input_prompt": "photorealistic:1.4, best quality, realistic, masterpiece, high quality, UHD, shadow, taken by Canon EOS, SIGMA Art Lens 35mm F1.4, ISO 200 Shutter Speed 2000, light blue hair, pink hair, t-shirt with floral print, jeans shorts, casual fashion, (hands in pocket), blooming all around, (limited palette), colourful, bright colors, pink bag", "sampler_name": "euler_ancestral", "lora_strength": 0, "upscale_factor": 3, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, bad anatomy", "checkpoint_model": "DarkSushi.safetensors" }, "logs": "Using seed: 15194649\nNow downloading checkpoints model: DarkSushi.safetensors\nModel ComfyUI/models/checkpoints/DarkSushi.safetensors already exists, skipping download\nUpscale model ComfyUI/models/upscale_models/RealESRGAN_x4plus.pth already exists, skipping download\nNow downloading loras model: gaming/Ahri.safetensors\nModel ComfyUI/models/loras/Ahri.safetensors already exists, skipping download\nUpscale model ComfyUI/models/upscale_models/RealESRGAN_x4plus.pth already exists, skipping download\ngot prompt\n 0%| | 0/20 [00:00<?, ?it/s]\n 10%|█ | 2/20 [00:00<00:01, 12.62it/s]\n 20%|██ | 4/20 [00:00<00:01, 12.04it/s]\n 30%|███ | 6/20 [00:00<00:01, 12.77it/s]\n 40%|████ | 8/20 [00:00<00:00, 13.01it/s]\n 50%|█████ | 10/20 [00:00<00:00, 13.28it/s]\n 60%|██████ | 12/20 [00:00<00:00, 13.42it/s]\n 70%|███████ | 14/20 [00:01<00:00, 13.42it/s]\n 80%|████████ | 16/20 [00:01<00:00, 13.50it/s]\n 90%|█████████ | 18/20 [00:01<00:00, 12.81it/s]\n100%|██████████| 20/20 [00:01<00:00, 13.10it/s]\n100%|██████████| 20/20 [00:01<00:00, 13.07it/s]\nIterativeLatentUpscale[1/3]: 600.0x1066.7 (scale:1.67) \n 0%| | 0/20 [00:00<?, ?it/s]\n 10%|█ | 2/20 [00:00<00:01, 9.23it/s]\n 15%|█▌ | 3/20 [00:00<00:02, 8.15it/s]\n 20%|██ | 4/20 [00:00<00:02, 7.69it/s]\n 25%|██▌ | 5/20 [00:00<00:02, 7.44it/s]\n 30%|███ | 6/20 [00:00<00:01, 7.28it/s]\n 35%|███▌ | 7/20 [00:00<00:01, 7.20it/s]\n 40%|████ | 8/20 [00:01<00:01, 7.11it/s]\n 45%|████▌ | 9/20 [00:01<00:01, 7.10it/s]\n 50%|█████ | 10/20 [00:01<00:01, 7.07it/s]\n 55%|█████▌ | 11/20 [00:01<00:01, 7.05it/s]\n 60%|██████ | 12/20 [00:01<00:01, 7.04it/s]\n 65%|██████▌ | 13/20 [00:01<00:00, 7.03it/s]\n 70%|███████ | 14/20 [00:01<00:00, 7.02it/s]\n 75%|███████▌ | 15/20 [00:02<00:00, 7.03it/s]\n 80%|████████ | 16/20 [00:02<00:00, 7.03it/s]\n 85%|████████▌ | 17/20 [00:02<00:00, 7.02it/s]\n 90%|█████████ | 18/20 [00:02<00:00, 7.01it/s]\n 95%|█████████▌| 19/20 [00:02<00:00, 6.83it/s]\n100%|██████████| 20/20 [00:02<00:00, 7.09it/s]\n100%|██████████| 20/20 [00:02<00:00, 7.18it/s]\nIterativeLatentUpscale[2/3]: 840.0x1493.3 (scale:2.33)\n 0%| | 0/20 [00:00<?, ?it/s]\n 10%|█ | 2/20 [00:00<00:03, 4.74it/s]\n 15%|█▌ | 3/20 [00:00<00:04, 3.89it/s]\n 20%|██ | 4/20 [00:01<00:04, 3.55it/s]\n 25%|██▌ | 5/20 [00:01<00:04, 3.38it/s]\n 30%|███ | 6/20 [00:01<00:04, 3.28it/s]\n 35%|███▌ | 7/20 [00:02<00:04, 3.22it/s]\n 40%|████ | 8/20 [00:02<00:03, 3.18it/s]\n 45%|████▌ | 9/20 [00:02<00:03, 3.16it/s]\n 50%|█████ | 10/20 [00:03<00:03, 3.14it/s]\n 55%|█████▌ | 11/20 [00:03<00:02, 3.13it/s]\n 60%|██████ | 12/20 [00:03<00:02, 3.12it/s]\n 65%|██████▌ | 13/20 [00:03<00:02, 3.06it/s]\n 70%|███████ | 14/20 [00:04<00:01, 3.07it/s]\n 
75%|███████▌ | 15/20 [00:04<00:01, 3.08it/s]\n 80%|████████ | 16/20 [00:04<00:01, 3.06it/s]\n 85%|████████▌ | 17/20 [00:05<00:00, 3.09it/s]\n 90%|█████████ | 18/20 [00:05<00:00, 3.09it/s]\n 95%|█████████▌| 19/20 [00:05<00:00, 3.09it/s]\n100%|██████████| 20/20 [00:06<00:00, 3.09it/s]\n100%|██████████| 20/20 [00:06<00:00, 3.20it/s]\nIterativeLatentUpscale[Final]: 1080.0x1920.0 (scale:3.00) \n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:04, 4.25it/s]\n 10%|█ | 2/20 [00:00<00:09, 1.86it/s]\n 15%|█▌ | 3/20 [00:01<00:10, 1.57it/s]\n 20%|██ | 4/20 [00:02<00:10, 1.46it/s]\n 25%|██▌ | 5/20 [00:03<00:10, 1.41it/s]\n 30%|███ | 6/20 [00:03<00:10, 1.38it/s]\n 35%|███▌ | 7/20 [00:04<00:09, 1.37it/s]\n 40%|████ | 8/20 [00:05<00:08, 1.35it/s]\n 45%|████▌ | 9/20 [00:06<00:08, 1.34it/s]\n 50%|█████ | 10/20 [00:07<00:07, 1.34it/s]\n 55%|█████▌ | 11/20 [00:07<00:06, 1.34it/s]\n 60%|██████ | 12/20 [00:08<00:05, 1.33it/s]\n 65%|██████▌ | 13/20 [00:09<00:05, 1.33it/s]\n 70%|███████ | 14/20 [00:10<00:04, 1.33it/s]\n 75%|███████▌ | 15/20 [00:10<00:03, 1.33it/s]\n 80%|████████ | 16/20 [00:11<00:03, 1.33it/s]\n 85%|████████▌ | 17/20 [00:12<00:02, 1.33it/s]\n 90%|█████████ | 18/20 [00:13<00:01, 1.33it/s]\n 95%|█████████▌| 19/20 [00:13<00:00, 1.33it/s]\n100%|██████████| 20/20 [00:14<00:00, 1.33it/s]\n100%|██████████| 20/20 [00:14<00:00, 1.38it/s]\nPrompt executed in 37.25 seconds\nnode output: {'images': [{'filename': 'ComfyUI_temp_fmizq_00006_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_fmizq_00007_.png', 'subfolder': '', 'type': 'temp'}]}\ntemp\ntemp\nnode output: {'images': [{'filename': 'ComfyUI_00050_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00051_.png', 'subfolder': '', 'type': 'output'}]}\noutput\noutput\n2 images generated successfully", "metrics": { "predict_time": 40.232048, "total_time": 40.249155 }, "output": [ "https://replicate.delivery/pbxt/2lECtnJ2EmKCMlDwGOLVtlIhdWEfqglj4cVmHQF0g0v2z7KJA/out-0.png", "https://replicate.delivery/pbxt/vzLf1Akc2aQaKCfUncKinTrtmAqbUg9tpunw9LcG29ktn3VSA/out-1.png" ], "started_at": "2024-02-12T07:24:53.879181Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/ittu66zbvy3prqfqngjmtykvzi", "cancel": "https://api.replicate.com/v1/predictions/ittu66zbvy3prqfqngjmtykvzi/cancel" }, "version": "517d6173f497984999e7a137ed43ef7bd4be6090f560fa4dd6372056abf55359" }
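One reproducibility detail visible across these runs: when the seed input is 0 or omitted, the logs report a freshly drawn value ("Using seed: 4693380" for a seed of 0 in an earlier run), whereas this run's explicit 15194649 is echoed back verbatim. A guess at that convention, with an illustrative random range:

import random

def resolve_seed(seed: int = 0) -> int:
    # Treat 0 as "pick one for me"; any nonzero seed is used as-is.
    return seed if seed != 0 else random.randint(1, 2**24)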
Prediction
bryantanjw/entropy-lol:517d6173f497984999e7a137ed43ef7bd4be6090f560fa4dd6372056abf55359
ID: v5ovtwdbb4hbab3tw4msqu3nqq · Status: Succeeded · Source: Web · Hardware: A40 (Large)
Input
- cfg: 8
- lora: gaming/Jett.safetensors
- seed: 0
- steps: 20
- width: 340
- height: 512
- batch_size: 3
- custom_lora: ""
- input_prompt: jett, white hair, black gloves, shoulder pads, black pants, cropped sleeveless jacket, feminine
- sampler_name: dpmpp_2m
- lora_strength: 1
- upscale_factor: 3
- negative_prompt: (worst quality:1.4), (low quality:1.4)
- checkpoint_model: MajicMixReverie.safetensors
{ "cfg": 8, "lora": "gaming/Jett.safetensors", "seed": 0, "steps": 20, "width": 340, "height": 512, "batch_size": 3, "custom_lora": "", "input_prompt": "jett, white hair, black gloves, shoulder pads, black pants, cropped sleeveless jacket, feminine", "sampler_name": "dpmpp_2m", "lora_strength": 1, "upscale_factor": 3, "negative_prompt": "(worst quality:1.4), (low quality:1.4)", "checkpoint_model": "MajicMixReverie.safetensors" }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:517d6173f497984999e7a137ed43ef7bd4be6090f560fa4dd6372056abf55359",
  {
    input: {
      cfg: 8,
      lora: "gaming/Jett.safetensors",
      seed: 0,
      steps: 20,
      width: 340,
      height: 512,
      batch_size: 3,
      custom_lora: "",
      input_prompt: "jett, white hair, black gloves, shoulder pads, black pants, cropped sleeveless jacket, feminine",
      sampler_name: "dpmpp_2m",
      lora_strength: 1,
      upscale_factor: 3,
      negative_prompt: "(worst quality:1.4), (low quality:1.4)",
      checkpoint_model: "MajicMixReverie.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:517d6173f497984999e7a137ed43ef7bd4be6090f560fa4dd6372056abf55359",
    input={
        "cfg": 8,
        "lora": "gaming/Jett.safetensors",
        "seed": 0,
        "steps": 20,
        "width": 340,
        "height": 512,
        "batch_size": 3,
        "custom_lora": "",
        "input_prompt": "jett, white hair, black gloves, shoulder pads, black pants, cropped sleeveless jacket, feminine",
        "sampler_name": "dpmpp_2m",
        "lora_strength": 1,
        "upscale_factor": 3,
        "negative_prompt": "(worst quality:1.4), (low quality:1.4)",
        "checkpoint_model": "MajicMixReverie.safetensors"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
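With batch_size set to 3, this run returns a list of three image URLs (see the Output JSON below). A minimal sketch for saving them locally, assuming the client returns plain URL strings as shown in that JSON and using only the standard library:

import urllib.request

for i, url in enumerate(output):
    # Download each generated image next to the script.
    urllib.request.urlretrieve(url, f"out-{i}.png")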
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:517d6173f497984999e7a137ed43ef7bd4be6090f560fa4dd6372056abf55359",
    "input": {
      "cfg": 8,
      "lora": "gaming/Jett.safetensors",
      "seed": 0,
      "steps": 20,
      "width": 340,
      "height": 512,
      "batch_size": 3,
      "custom_lora": "",
      "input_prompt": "jett, white hair, black gloves, shoulder pads, black pants, cropped sleeveless jacket, feminine",
      "sampler_name": "dpmpp_2m",
      "lora_strength": 1,
      "upscale_factor": 3,
      "negative_prompt": "(worst quality:1.4), (low quality:1.4)",
      "checkpoint_model": "MajicMixReverie.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
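The Prefer: wait header asks the API to hold the response until the prediction completes, up to a server-side limit; for a run as long as this one, the returned prediction may still be processing. In that case you poll its urls.get endpoint (visible in the Output JSON below) until the status settles. A minimal Python sketch of that loop, assuming the third-party requests package is installed:

import os
import time
import requests

headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}
# urls.get from the prediction response below
get_url = "https://api.replicate.com/v1/predictions/v5ovtwdbb4hbab3tw4msqu3nqq"

while True:
    prediction = requests.get(get_url, headers=headers).json()
    if prediction["status"] in ("succeeded", "failed", "canceled"):
        break
    time.sleep(2)  # predictions here take one to two minutes

print(prediction["output"])  # list of delivered image URLs on success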
Output
{ "completed_at": "2024-02-09T19:07:46.047043Z", "created_at": "2024-02-09T19:04:17.324407Z", "data_removed": false, "error": null, "id": "v5ovtwdbb4hbab3tw4msqu3nqq", "input": { "cfg": 8, "lora": "gaming/Jett.safetensors", "seed": 0, "steps": 20, "width": 340, "height": 512, "batch_size": 3, "custom_lora": "", "input_prompt": "jett, white hair, black gloves, shoulder pads, black pants, cropped sleeveless jacket, feminine", "sampler_name": "dpmpp_2m", "lora_strength": 1, "upscale_factor": 3, "negative_prompt": "(worst quality:1.4), (low quality:1.4)", "checkpoint_model": "MajicMixReverie.safetensors" }, "logs": "Using seed: 3171158\nNow downloading checkpoints model: MajicMixReverie.safetensors\nModel ComfyUI/models/checkpoints/MajicMixReverie.safetensors not found, downloading checkpoint model\nDownloaded model to ComfyUI/models/checkpoints/MajicMixReverie.safetensors\nUpscale model not found, downloading upscale model\nDownloaded upscale model to ComfyUI/models/upscale_models/RealESRGAN_x4plus.pth\nNow downloading loras model: gaming/Jett.safetensors\nModel ComfyUI/models/loras/Jett.safetensors not found, downloading checkpoint model\nDownloaded model to ComfyUI/models/loras/Jett.safetensors\nUpscale model ComfyUI/models/upscale_models/RealESRGAN_x4plus.pth already exists, skipping download\ngot prompt\nmodel_type EPS\nadm 0\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nmissing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}\nleft over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nLeftover VAE keys ['model_ema.decay', 'model_ema.num_updates']\nRequested to load SD1ClipModel\nLoading 1 new model\nRequested to load BaseModel\nLoading 1 new model\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:10, 1.87it/s]\n 10%|█ | 2/20 [00:00<00:05, 3.52it/s]\n 20%|██ | 4/20 [00:00<00:02, 6.72it/s]\n 30%|███ | 6/20 [00:00<00:01, 9.01it/s]\n 40%|████ | 8/20 [00:01<00:01, 10.63it/s]\n 50%|█████ | 10/20 [00:01<00:00, 11.77it/s]\n 60%|██████ | 12/20 [00:01<00:00, 12.61it/s]\n 70%|███████ | 14/20 [00:01<00:00, 13.18it/s]\n 80%|████████ | 16/20 [00:01<00:00, 13.61it/s]\n 90%|█████████ | 18/20 [00:01<00:00, 13.85it/s]\n100%|██████████| 20/20 [00:01<00:00, 14.07it/s]\n100%|██████████| 20/20 [00:01<00:00, 10.63it/s]\nRequested to load AutoencoderKL\nLoading 1 new model\nIterativeLatentUpscale[1/3]: 560.0x853.3 (scale:1.67)\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:05, 3.36it/s]\n 15%|█▌ | 3/20 [00:00<00:02, 5.73it/s]\n 20%|██ | 4/20 [00:00<00:02, 5.49it/s]\n 25%|██▌ | 5/20 [00:00<00:02, 5.34it/s]\n 30%|███ | 6/20 [00:01<00:02, 5.25it/s]\n 35%|███▌ | 7/20 [00:01<00:02, 5.19it/s]\n 40%|████ | 8/20 [00:01<00:02, 5.14it/s]\n 45%|████▌ | 9/20 [00:01<00:02, 5.12it/s]\n 50%|█████ | 10/20 [00:01<00:01, 5.08it/s]\n 55%|█████▌ | 11/20 [00:02<00:01, 5.05it/s]\n 60%|██████ | 12/20 [00:02<00:01, 5.06it/s]\n 65%|██████▌ | 13/20 [00:02<00:01, 5.06it/s]\n 70%|███████ | 14/20 [00:02<00:01, 5.07it/s]\n 75%|███████▌ | 15/20 [00:02<00:00, 5.08it/s]\n 80%|████████ | 16/20 [00:03<00:00, 5.08it/s]\n 85%|████████▌ | 17/20 [00:03<00:00, 5.07it/s]\n 90%|█████████ | 18/20 [00:03<00:00, 3.74it/s]\n 95%|█████████▌| 19/20 [00:03<00:00, 4.00it/s]\n100%|██████████| 20/20 [00:04<00:00, 4.18it/s]\n100%|██████████| 20/20 [00:04<00:00, 
4.78it/s]\nIterativeLatentUpscale[2/3]: 784.0x1194.7 (scale:2.33) \n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:10, 1.77it/s]\n 15%|█▌ | 3/20 [00:01<00:06, 2.59it/s]\n 20%|██ | 4/20 [00:01<00:07, 2.22it/s]\n 25%|██▌ | 5/20 [00:02<00:07, 2.05it/s]\n 30%|███ | 6/20 [00:02<00:07, 1.95it/s]\n 35%|███▌ | 7/20 [00:03<00:06, 1.89it/s]\n 40%|████ | 8/20 [00:04<00:06, 1.85it/s]\n 45%|████▌ | 9/20 [00:04<00:05, 1.84it/s]\n 50%|█████ | 10/20 [00:05<00:05, 1.83it/s]\n 55%|█████▌ | 11/20 [00:05<00:04, 1.83it/s]\n 60%|██████ | 12/20 [00:06<00:04, 1.82it/s]\n 65%|██████▌ | 13/20 [00:06<00:03, 1.82it/s]\n 70%|███████ | 14/20 [00:07<00:03, 1.81it/s]\n 75%|███████▌ | 15/20 [00:07<00:02, 1.80it/s]\n 80%|████████ | 16/20 [00:08<00:02, 1.78it/s]\n 85%|████████▌ | 17/20 [00:09<00:01, 1.79it/s]\n 90%|█████████ | 18/20 [00:09<00:01, 1.80it/s]\n 95%|█████████▌| 19/20 [00:10<00:00, 1.80it/s]\n100%|██████████| 20/20 [00:10<00:00, 1.68it/s]\n100%|██████████| 20/20 [00:10<00:00, 1.84it/s]\nIterativeLatentUpscale[Final]: 1008.0x1536.0 (scale:3.00) \n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:08, 2.25it/s]\n 10%|█ | 2/20 [00:01<00:18, 1.02s/it]\n 15%|█▌ | 3/20 [00:03<00:20, 1.20s/it]\n 20%|██ | 4/20 [00:04<00:20, 1.28s/it]\n 25%|██▌ | 5/20 [00:06<00:19, 1.33s/it]\n 30%|███ | 6/20 [00:07<00:19, 1.36s/it]\n 35%|███▌ | 7/20 [00:08<00:17, 1.38s/it]\n 40%|████ | 8/20 [00:10<00:16, 1.40s/it]\n 45%|████▌ | 9/20 [00:11<00:15, 1.40s/it]\n 50%|█████ | 10/20 [00:13<00:14, 1.41s/it]\n 55%|█████▌ | 11/20 [00:14<00:12, 1.42s/it]\n 60%|██████ | 12/20 [00:16<00:11, 1.43s/it]\n 65%|██████▌ | 13/20 [00:17<00:10, 1.44s/it]\n 70%|███████ | 14/20 [00:18<00:08, 1.43s/it]\n 75%|███████▌ | 15/20 [00:20<00:07, 1.43s/it]\n 80%|████████ | 16/20 [00:21<00:05, 1.43s/it]\n 85%|████████▌ | 17/20 [00:23<00:04, 1.44s/it]\n 90%|█████████ | 18/20 [00:24<00:02, 1.44s/it]\n 95%|█████████▌| 19/20 [00:26<00:01, 1.44s/it]\n100%|██████████| 20/20 [00:27<00:00, 1.44s/it]\n100%|██████████| 20/20 [00:27<00:00, 1.38s/it]\nPrompt executed in 69.08 seconds\nnode output: {'images': [{'filename': 'ComfyUI_temp_bmvsu_00001_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_bmvsu_00002_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_bmvsu_00003_.png', 'subfolder': '', 'type': 'temp'}]}\ntemp\ntemp\ntemp\nnode output: {'images': [{'filename': 'ComfyUI_00045_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00046_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00047_.png', 'subfolder': '', 'type': 'output'}]}\noutput\noutput\noutput\n3 images generated successfully", "metrics": { "predict_time": 89.119128, "total_time": 208.722636 }, "output": [ "https://replicate.delivery/pbxt/I81Tq5sfpYzuUSa9TiF1m7zFSaHxlRqc82D1Rv4VzU2fnCVSA/out-0.png", "https://replicate.delivery/pbxt/Jr75H42Cabb3Ol1uqcoZYfn4jh9sG2P4T6H4WWGV44JAUhKJA/out-1.png", "https://replicate.delivery/pbxt/MLMPhBqqIT45CxoyPTafssW1ZbqnfUVvUUlGYngCwp2BoCVSA/out-2.png" ], "started_at": "2024-02-09T19:06:16.927915Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/v5ovtwdbb4hbab3tw4msqu3nqq", "cancel": "https://api.replicate.com/v1/predictions/v5ovtwdbb4hbab3tw4msqu3nqq/cancel" }, "version": "517d6173f497984999e7a137ed43ef7bd4be6090f560fa4dd6372056abf55359" }
Prediction
bryantanjw/entropy-lol:7c951e2ad9afa4ab1a1f704cdabb807145980ac439ec19eb12429442a971d5a9
ID: dtmo47zbeuw2q5qznut3e5fhhm
Status: Succeeded
Source: Web
Hardware: A100 (40GB)
Input
- cfg: 7
- lora: gaming/Ahri.safetensors
- steps: 20
- width: 341
- height: 512
- batch_size: 2
- custom_lora: ""
- input_prompt: (best quality),(masterpiece:1.3),dragon,1girl,long hair,white hair,hair accessories,chinese clothing,red clothing,chinese architecture background,fireworks,beautiful and aesthetic,hires (an oriental dragon:1.2),
- sampler_name: dpmpp_2m
- lora_strength: 0
- upscale_factor: 3.75
- negative_prompt: (worst quality:1.4), (low quality:1.4), simple background, (bad anatomy)
- checkpoint_model: DarkSushi.safetensors
{ "cfg": 7, "lora": "gaming/Ahri.safetensors", "steps": 20, "width": 341, "height": 512, "batch_size": 2, "custom_lora": "", "input_prompt": "(best quality),(masterpiece:1.3),dragon,1girl,long hair,white hair,hair accessories,chinese clothing,red clothing,chinese architecture background,fireworks,beautiful and aesthetic,hires (an oriental dragon:1.2),", "sampler_name": "dpmpp_2m", "lora_strength": 0, "upscale_factor": 3.75, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, (bad anatomy)", "checkpoint_model": "DarkSushi.safetensors" }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "bryantanjw/entropy-lol:7c951e2ad9afa4ab1a1f704cdabb807145980ac439ec19eb12429442a971d5a9",
  {
    input: {
      cfg: 7,
      lora: "gaming/Ahri.safetensors",
      steps: 20,
      width: 341,
      height: 512,
      batch_size: 2,
      custom_lora: "",
      input_prompt: "(best quality),(masterpiece:1.3),dragon,1girl,long hair,white hair,hair accessories,chinese clothing,red clothing,chinese architecture background,fireworks,beautiful and aesthetic,hires (an oriental dragon:1.2),",
      sampler_name: "dpmpp_2m",
      lora_strength: 0,
      upscale_factor: 3.75,
      negative_prompt: "(worst quality:1.4), (low quality:1.4), simple background, (bad anatomy)",
      checkpoint_model: "DarkSushi.safetensors"
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "bryantanjw/entropy-lol:7c951e2ad9afa4ab1a1f704cdabb807145980ac439ec19eb12429442a971d5a9",
    input={
        "cfg": 7,
        "lora": "gaming/Ahri.safetensors",
        "steps": 20,
        "width": 341,
        "height": 512,
        "batch_size": 2,
        "custom_lora": "",
        "input_prompt": "(best quality),(masterpiece:1.3),dragon,1girl,long hair,white hair,hair accessories,chinese clothing,red clothing,chinese architecture background,fireworks,beautiful and aesthetic,hires (an oriental dragon:1.2),",
        "sampler_name": "dpmpp_2m",
        "lora_strength": 0,
        "upscale_factor": 3.75,
        "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, (bad anatomy)",
        "checkpoint_model": "DarkSushi.safetensors"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
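replicate.run blocks for the whole prediction (about 62 seconds of predict time in the Output below). A finished prediction can also be fetched later by its ID with the same client; a minimal sketch using the ID from the response below (predictions.get is a standard client call, but treat the exact attribute names as assumptions):

import replicate

# ID taken from the Output JSON below
prediction = replicate.predictions.get("dtmo47zbeuw2q5qznut3e5fhhm")
print(prediction.status)   # "succeeded"
print(prediction.output)   # the two delivered image URLs
print(prediction.metrics)  # predict_time / total_time, as in the JSON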
Run bryantanjw/entropy-lol using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "bryantanjw/entropy-lol:7c951e2ad9afa4ab1a1f704cdabb807145980ac439ec19eb12429442a971d5a9",
    "input": {
      "cfg": 7,
      "lora": "gaming/Ahri.safetensors",
      "steps": 20,
      "width": 341,
      "height": 512,
      "batch_size": 2,
      "custom_lora": "",
      "input_prompt": "(best quality),(masterpiece:1.3),dragon,1girl,long hair,white hair,hair accessories,chinese clothing,red clothing,chinese architecture background,fireworks,beautiful and aesthetic,hires (an oriental dragon:1.2),",
      "sampler_name": "dpmpp_2m",
      "lora_strength": 0,
      "upscale_factor": 3.75,
      "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, (bad anatomy)",
      "checkpoint_model": "DarkSushi.safetensors"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-02-12T18:26:02.447451Z", "created_at": "2024-02-12T18:24:33.649518Z", "data_removed": false, "error": null, "id": "dtmo47zbeuw2q5qznut3e5fhhm", "input": { "cfg": 7, "lora": "gaming/Ahri.safetensors", "steps": 20, "width": 341, "height": 512, "batch_size": 2, "custom_lora": "", "input_prompt": "(best quality),(masterpiece:1.3),dragon,1girl,long hair,white hair,hair accessories,chinese clothing,red clothing,chinese architecture background,fireworks,beautiful and aesthetic,hires (an oriental dragon:1.2),", "sampler_name": "dpmpp_2m", "lora_strength": 0, "upscale_factor": 3.75, "negative_prompt": "(worst quality:1.4), (low quality:1.4), simple background, (bad anatomy)", "checkpoint_model": "DarkSushi.safetensors" }, "logs": "Using seed: 12995183\nNow downloading checkpoints model: DarkSushi.safetensors\nModel ComfyUI/models/checkpoints/DarkSushi.safetensors not found, downloading checkpoint model\nDownloaded model to ComfyUI/models/checkpoints/DarkSushi.safetensors\nUpscale model not found, downloading upscale model\nDownloaded upscale model to ComfyUI/models/upscale_models/RealESRGAN_x4plus.pth\nNow downloading loras model: gaming/Ahri.safetensors\nModel ComfyUI/models/loras/Ahri.safetensors not found, downloading checkpoint model\nDownloaded model to ComfyUI/models/loras/Ahri.safetensors\nUpscale model ComfyUI/models/upscale_models/RealESRGAN_x4plus.pth already exists, skipping download\ngot prompt\nmodel_type EPS\nadm 0\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nmissing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}\nleft over keys: dict_keys(['embedding_manager.embedder.transformer.text_model.embeddings.position_embedding.weight', 'embedding_manager.embedder.transformer.text_model.embeddings.position_ids', 'embedding_manager.embedder.transformer.text_model.embeddings.token_embedding.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.layer_norm1.bias', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.1.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.mlp.fc2.weight', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.v_proj.bias', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.mlp.fc1.weight', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.6.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.q_proj.bias', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.final_layer_norm.bias', 'embedding_manager.embedder.transformer.text_model.final_layer_norm.weight', 'lora_te_text_model_encoder_layers_0_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_0_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_0_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_0_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_0_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_0_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_0_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_0_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_0_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_0_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_0_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_0_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_0_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_0_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_0_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_0_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_10_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_10_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_10_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_10_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_10_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_10_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_10_self_attn_k_proj.alpha', 
'lora_te_text_model_encoder_layers_10_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_10_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_10_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_10_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_10_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_10_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_10_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_10_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_10_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_10_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_10_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_11_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_11_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_11_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_11_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_11_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_11_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_11_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_11_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_11_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_11_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_11_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_11_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_11_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_11_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_11_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_11_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_11_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_11_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_1_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_1_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_1_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_1_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_1_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_1_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_1_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_1_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_1_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_1_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_1_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_1_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_1_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_1_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_1_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_1_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_1_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_1_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_2_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_2_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_2_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_2_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_2_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_2_mlp_fc2.lora_up.weight', 
'lora_te_text_model_encoder_layers_2_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_2_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_2_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_2_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_2_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_2_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_2_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_2_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_2_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_2_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_2_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_2_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_3_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_3_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_3_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_3_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_3_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_3_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_3_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_3_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_3_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_3_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_3_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_3_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_3_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_3_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_3_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_3_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_3_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_3_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_4_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_4_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_4_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_4_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_4_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_4_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_4_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_4_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_4_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_4_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_4_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_4_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_4_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_4_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_4_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_4_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_4_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_4_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_5_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_5_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_5_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_5_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_5_mlp_fc2.lora_down.weight', 
'lora_te_text_model_encoder_layers_5_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_5_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_5_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_5_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_5_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_5_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_5_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_5_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_5_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_5_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_5_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_5_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_5_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_6_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_6_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_6_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_6_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_6_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_6_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_6_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_6_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_6_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_6_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_6_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_6_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_6_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_6_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_6_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_6_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_6_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_6_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_7_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_7_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_7_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_7_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_7_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_7_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_7_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_7_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_7_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_7_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_7_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_7_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_7_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_7_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_7_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_7_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_7_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_7_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_8_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_8_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_8_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_8_mlp_fc2.alpha', 
…, 'model_ema.decay', 'model_ema.num_updates', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
Leftover VAE keys ['model_ema.decay', 'model_ema.num_updates']
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
100%|██████████| 20/20 [00:01<00:00, 10.42it/s]
Requested to load AutoencoderKL
Loading 1 new model
IterativeLatentUpscale[1/3]: 644.0x981.3 (scale:1.92)
100%|██████████| 20/20 [00:02<00:00, 8.06it/s]
IterativeLatentUpscale[2/3]: 952.0x1450.7 (scale:2.83)
100%|██████████| 20/20 [00:07<00:00, 2.56it/s]
IterativeLatentUpscale[Final]: 1260.0x1920.0 (scale:3.75)
100%|██████████| 20/20 [00:18<00:00, 1.05it/s]
Prompt executed in 48.56 seconds
node output: {'images': [{'filename': 'ComfyUI_temp_sudes_00001_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_sudes_00002_.png', 'subfolder': '', 'type': 'temp'}]}
node output: {'images': [{'filename': 'ComfyUI_00045_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00046_.png', 'subfolder': '', 'type': 'output'}]}
2 images generated successfully",
"metrics": {
"predict_time": 62.249355,
"total_time": 88.797933
},
"output": [
"https://replicate.delivery/pbxt/soQPGGsf5bRoaSYNoNly7A6tjozZp39m1ivjNI4Mehe1lCskA/out-0.png",
"https://replicate.delivery/pbxt/sUHrrAfxy41mY6v4SfUzAblluefDEzMK8ye7fObEcDsguUglE/out-1.png"
],
"started_at": "2024-02-12T18:25:00.198096Z",
"status": "succeeded",
"urls": {
"get": "https://api.replicate.com/v1/predictions/dtmo47zbeuw2q5qznut3e5fhhm",
"cancel": "https://api.replicate.com/v1/predictions/dtmo47zbeuw2q5qznut3e5fhhm/cancel"
},
"version": "7c951e2ad9afa4ab1a1f704cdabb807145980ac439ec19eb12429442a971d5a9"
}
Generated in
Using seed: 12995183
Now downloading checkpoints model: DarkSushi.safetensors
Model ComfyUI/models/checkpoints/DarkSushi.safetensors not found, downloading checkpoint model
Downloaded model to ComfyUI/models/checkpoints/DarkSushi.safetensors
Upscale model not found, downloading upscale model
Downloaded upscale model to ComfyUI/models/upscale_models/RealESRGAN_x4plus.pth
Now downloading loras model: gaming/Ahri.safetensors
Model ComfyUI/models/loras/Ahri.safetensors not found, downloading checkpoint model
Downloaded model to ComfyUI/models/loras/Ahri.safetensors
Upscale model ComfyUI/models/upscale_models/RealESRGAN_x4plus.pth already exists, skipping download
got prompt
model_type EPS
adm 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
missing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['embedding_manager.embedder.…', …, 'model_ema.decay', 'model_ema.num_updates', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
Leftover VAE keys ['model_ema.decay', 'model_ema.num_updates']
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
100%|██████████| 20/20 [00:01<00:00, 10.42it/s]
Requested to load AutoencoderKL
Loading 1 new model
IterativeLatentUpscale[1/3]: 644.0x981.3 (scale:1.92)
100%|██████████| 20/20 [00:02<00:00, 8.06it/s]
IterativeLatentUpscale[2/3]: 952.0x1450.7 (scale:2.83)
100%|██████████| 20/20 [00:07<00:00, 2.56it/s]
IterativeLatentUpscale[Final]: 1260.0x1920.0 (scale:3.75)
100%|██████████| 20/20 [00:18<00:00, 1.05it/s]
Prompt executed in 48.56 seconds
node output: {'images': [{'filename': 'ComfyUI_temp_sudes_00001_.png', 'subfolder': '', 'type': 'temp'}, {'filename': 'ComfyUI_temp_sudes_00002_.png', 'subfolder': '', 'type': 'temp'}]}
node output: {'images': [{'filename': 'ComfyUI_00045_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00046_.png', 'subfolder': '', 'type': 'output'}]}
2 images generated successfully
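The IterativeLatentUpscale[1/3] through [Final] lines show how the workflow reaches its final resolution: rather than one 3.75x jump, the node re-samples the latent at three intermediate sizes. The printed sizes imply a 336x512 base (1260x1920 / 3.75) and are consistent with interpolating the scale linearly from 1.0 to the final value. A minimal sketch that reproduces the logged numbers; the function name is hypothetical, and the real node's rounding may differ:

def iterative_upscale_schedule(base_w, base_h, target_scale, steps):
    # Yield (width, height, scale) per stage, interpolating the
    # scale linearly from 1.0 up to target_scale.
    for k in range(1, steps + 1):
        scale = 1.0 + (target_scale - 1.0) * k / steps
        yield base_w * scale, base_h * scale, scale

# Base pass decoded at 336x512; final size 1260x1920, i.e. scale 3.75.
for w, h, s in iterative_upscale_schedule(336, 512, 3.75, 3):
    print(f"{w:.1f}x{h:.1f} (scale:{s:.2f})")
# -> 644.0x981.3 (scale:1.92)
#    952.0x1450.7 (scale:2.83)
#    1260.0x1920.0 (scale:3.75)

Re-sampling at each intermediate size is also why the step rate in the log falls from roughly 8 it/s to 1 it/s: every stage runs the full 20 sampler steps on a progressively larger latent.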
Want to make some of these yourself?
Run this model
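One detail worth copying if you adapt this container: the logs above show weights being pulled lazily ("... not found, downloading checkpoint model" / "... already exists, skipping download"), so only the checkpoint, LoRA, and upscaler a request actually names get fetched. A minimal sketch of that pattern; the helper and the source URL are hypothetical, since the page does not show the real download code:

import os
import urllib.request

def ensure_weights(url, dest):
    # Download weights only if they are not already cached on disk.
    if os.path.exists(dest):
        print(f"{dest} already exists, skipping download")
        return dest
    print(f"Model {dest} not found, downloading")
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    urllib.request.urlretrieve(url, dest)
    print(f"Downloaded model to {dest}")
    return dest

# Hypothetical source URL; the real one is not exposed on this page.
ensure_weights("https://example.com/weights/DarkSushi.safetensors",
               "ComfyUI/models/checkpoints/DarkSushi.safetensors")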