lucataco / ssd-lora-inference
POC to run inference on SSD-1B LoRAs
- Public
- 2.7K runs
- L40S
- GitHub
Prediction
lucataco/ssd-lora-inference:0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1
ID: yd6z4elbb7s2ws2yc2s6azz2ju
Status: Succeeded
Source: Web
Hardware: A40 (Large)
Created by: @lucataco
Input
{ "seed": 37543, "width": 1024, "height": 1024, "prompt": "A photo of TOK", "refine": "no_refiner", "lora_url": "https://replicate.delivery/pbxt/u5hevTlT560fI0D1TYxwozJk3gEJHAjVCubvbzngsaeoIIqjA/trained_model.tar", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 25 }
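If you drive this model programmatically, it can help to treat the values above as a known-good baseline and override only what changes between runs. A minimal sketch — the defaults below are simply the values from this example prediction, not the model's documented defaults, and `build_input` is an illustrative helper, not part of any client library:

```python
# Defaults copied from the example prediction above. These are NOT the
# model's own defaults, just a known-good starting point.
DEFAULTS = {
    "seed": 37543,
    "width": 1024,
    "height": 1024,
    "prompt": "A photo of TOK",
    "refine": "no_refiner",
    "scheduler": "K_EULER",
    "lora_scale": 0.6,
    "num_outputs": 1,
    "guidance_scale": 7.5,
    "apply_watermark": True,
    "high_noise_frac": 0.8,
    "negative_prompt": "",
    "prompt_strength": 0.8,
    "num_inference_steps": 25,
}

def build_input(lora_url, **overrides):
    """Merge per-run overrides into the defaults, rejecting unknown keys
    so a typo does not silently become an ignored input."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown input keys: {sorted(unknown)}")
    return {**DEFAULTS, **overrides, "lora_url": lora_url}

payload = build_input(
    "https://replicate.delivery/pbxt/u5hevTlT560fI0D1TYxwozJk3gEJHAjVCubvbzngsaeoIIqjA/trained_model.tar",
    prompt="A photo of TOK wearing a hat",
    seed=12345,
)
print(payload["seed"])  # 12345
```

The resulting dict can be passed as the `input` in any of the client examples below.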
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run lucataco/ssd-lora-inference using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "lucataco/ssd-lora-inference:0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1",
  {
    input: {
      seed: 37543,
      width: 1024,
      height: 1024,
      prompt: "A photo of TOK",
      refine: "no_refiner",
      lora_url: "https://replicate.delivery/pbxt/u5hevTlT560fI0D1TYxwozJk3gEJHAjVCubvbzngsaeoIIqjA/trained_model.tar",
      scheduler: "K_EULER",
      lora_scale: 0.6,
      num_outputs: 1,
      guidance_scale: 7.5,
      apply_watermark: true,
      high_noise_frac: 0.8,
      negative_prompt: "",
      prompt_strength: 0.8,
      num_inference_steps: 25
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk (fs.writeFile requires a callback, so use the
// promise-based API instead):
await fs.promises.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run lucataco/ssd-lora-inference using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lucataco/ssd-lora-inference:0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1",
    input={
        "seed": 37543,
        "width": 1024,
        "height": 1024,
        "prompt": "A photo of TOK",
        "refine": "no_refiner",
        "lora_url": "https://replicate.delivery/pbxt/u5hevTlT560fI0D1TYxwozJk3gEJHAjVCubvbzngsaeoIIqjA/trained_model.tar",
        "scheduler": "K_EULER",
        "lora_scale": 0.6,
        "num_outputs": 1,
        "guidance_scale": 7.5,
        "apply_watermark": True,
        "high_noise_frac": 0.8,
        "negative_prompt": "",
        "prompt_strength": 0.8,
        "num_inference_steps": 25
    }
)

# To access the file URL:
print(output[0].url())
#=> "http://example.com"

# To write the file to disk:
with open("my-image.png", "wb") as file:
    file.write(output[0].read())
To learn more, take a look at the guide on getting started with Python.
Run lucataco/ssd-lora-inference using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "lucataco/ssd-lora-inference:0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1", "input": { "seed": 37543, "width": 1024, "height": 1024, "prompt": "A photo of TOK", "refine": "no_refiner", "lora_url": "https://replicate.delivery/pbxt/u5hevTlT560fI0D1TYxwozJk3gEJHAjVCubvbzngsaeoIIqjA/trained_model.tar", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 25 } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
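The `Prefer: wait` header above asks the API to hold the connection open until the prediction finishes. If you omit it, the POST returns immediately and you poll the prediction's `urls.get` endpoint until `status` reaches a terminal value. A minimal polling sketch — `fetch` here is an assumed callable you supply (e.g. a wrapper around your HTTP client that sets the Authorization header and returns the response as a dict), not part of any library:

```python
import time

# Terminal prediction statuses, per the response JSON shown on this page.
TERMINAL = {"succeeded", "failed", "canceled"}

def poll_prediction(get_url, fetch, interval=1.0, timeout=60.0):
    """Poll a prediction's `urls.get` endpoint until it reaches a terminal
    state. `fetch` takes a URL and returns the prediction as a dict."""
    deadline = time.monotonic() + timeout
    while True:
        prediction = fetch(get_url)
        if prediction["status"] in TERMINAL:
            return prediction
        if time.monotonic() > deadline:
            raise TimeoutError(f"prediction still {prediction['status']} after {timeout}s")
        time.sleep(interval)

# Demonstration with a stub fetcher that "finishes" on the second call:
responses = iter([
    {"status": "processing"},
    {"status": "succeeded", "output": ["https://replicate.delivery/pbxt/.../out-0.png"]},
])
result = poll_prediction(
    "https://api.replicate.com/v1/predictions/yd6z4elbb7s2ws2yc2s6azz2ju",
    lambda url: next(responses),
    interval=0.01,
)
print(result["status"])  # succeeded
```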
You can run this model locally using Cog. First, install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/lucataco/ssd-lora-inference@sha256:0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1 \
  -i 'seed=37543' \
  -i 'width=1024' \
  -i 'height=1024' \
  -i 'prompt="A photo of TOK"' \
  -i 'refine="no_refiner"' \
  -i 'lora_url="https://replicate.delivery/pbxt/u5hevTlT560fI0D1TYxwozJk3gEJHAjVCubvbzngsaeoIIqjA/trained_model.tar"' \
  -i 'scheduler="K_EULER"' \
  -i 'lora_scale=0.6' \
  -i 'num_outputs=1' \
  -i 'guidance_scale=7.5' \
  -i 'apply_watermark=true' \
  -i 'high_noise_frac=0.8' \
  -i 'negative_prompt=""' \
  -i 'prompt_strength=0.8' \
  -i 'num_inference_steps=25'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/lucataco/ssd-lora-inference@sha256:0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{ "input": { "seed": 37543, "width": 1024, "height": 1024, "prompt": "A photo of TOK", "refine": "no_refiner", "lora_url": "https://replicate.delivery/pbxt/u5hevTlT560fI0D1TYxwozJk3gEJHAjVCubvbzngsaeoIIqjA/trained_model.tar", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 25 } }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
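The same request can be made from Python. A sketch under the assumptions from the curl example above (local Cog server on port 5000, payload wrapped in an `"input"` key); `predict_local` and `fake_opener` are illustrative names, and the opener is injectable so the snippet runs without a live server:

```python
import contextlib
import io
import json
import urllib.request

def predict_local(payload, url="http://localhost:5000/predictions", opener=None):
    """POST {"input": payload} to a locally running Cog server and return
    the parsed JSON response. `opener` defaults to urllib.request.urlopen;
    inject a stub to exercise the function without a server."""
    req = urllib.request.Request(
        url,
        data=json.dumps({"input": payload}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with (opener or urllib.request.urlopen)(req) as resp:
        return json.load(resp)

# Demonstration against a stub instead of a live server:
def fake_opener(req):
    canned = {"status": "succeeded", "output": ["out-0.png"]}
    return contextlib.closing(io.BytesIO(json.dumps(canned).encode()))

result = predict_local({"prompt": "A photo of TOK"}, opener=fake_opener)
print(result["status"])  # succeeded
```

With a real server running, drop the `opener` argument and the call blocks until the prediction completes.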
Output
{ "completed_at": "2023-11-05T03:53:05.162489Z", "created_at": "2023-11-05T03:52:50.827347Z", "data_removed": false, "error": null, "id": "yd6z4elbb7s2ws2yc2s6azz2ju", "input": { "seed": 37543, "width": 1024, "height": 1024, "prompt": "A photo of TOK", "refine": "no_refiner", "lora_url": "https://replicate.delivery/pbxt/u5hevTlT560fI0D1TYxwozJk3gEJHAjVCubvbzngsaeoIIqjA/trained_model.tar", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 25 }, "logs": "LORA\nLoading ssd txt2img pipeline...\nLoading pipeline components...: 0%| | 0/7 [00:00<?, ?it/s]\nLoading pipeline components...: 14%|█▍ | 1/7 [00:00<00:01, 3.64it/s]\nLoading pipeline components...: 57%|█████▋ | 4/7 [00:00<00:00, 8.12it/s]\nLoading pipeline components...: 71%|███████▏ | 5/7 [00:00<00:00, 6.07it/s]\nLoading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 8.21it/s]\nLoading ssd lora weights...\nLoading fine-tuned model\nDoes not have Unet. 
Assume we are using LoRA\nLoading Unet LoRA\nUsing seed: 37543\nPrompt: A photo of <s0><s1>\ntxt2img mode\n 0%| | 0/25 [00:00<?, ?it/s]\n 4%|▍ | 1/25 [00:00<00:04, 5.95it/s]\n 8%|▊ | 2/25 [00:00<00:03, 5.98it/s]\n 12%|█▏ | 3/25 [00:00<00:03, 6.00it/s]\n 16%|█▌ | 4/25 [00:00<00:03, 5.99it/s]\n 20%|██ | 5/25 [00:00<00:03, 5.98it/s]\n 24%|██▍ | 6/25 [00:01<00:03, 5.99it/s]\n 28%|██▊ | 7/25 [00:01<00:02, 6.00it/s]\n 32%|███▏ | 8/25 [00:01<00:02, 6.01it/s]\n 36%|███▌ | 9/25 [00:01<00:02, 6.02it/s]\n 40%|████ | 10/25 [00:01<00:02, 6.02it/s]\n 44%|████▍ | 11/25 [00:01<00:02, 6.02it/s]\n 48%|████▊ | 12/25 [00:01<00:02, 6.02it/s]\n 52%|█████▏ | 13/25 [00:02<00:01, 6.02it/s]\n 56%|█████▌ | 14/25 [00:02<00:01, 6.02it/s]\n 60%|██████ | 15/25 [00:02<00:01, 6.02it/s]\n 64%|██████▍ | 16/25 [00:02<00:01, 6.03it/s]\n 68%|██████▊ | 17/25 [00:02<00:01, 6.03it/s]\n 72%|███████▏ | 18/25 [00:02<00:01, 6.02it/s]\n 76%|███████▌ | 19/25 [00:03<00:00, 6.02it/s]\n 80%|████████ | 20/25 [00:03<00:00, 6.02it/s]\n 84%|████████▍ | 21/25 [00:03<00:00, 6.02it/s]\n 88%|████████▊ | 22/25 [00:03<00:00, 6.02it/s]\n 92%|█████████▏| 23/25 [00:03<00:00, 6.01it/s]\n 96%|█████████▌| 24/25 [00:03<00:00, 6.00it/s]\n100%|██████████| 25/25 [00:04<00:00, 6.01it/s]\n100%|██████████| 25/25 [00:04<00:00, 6.01it/s]", "metrics": { "predict_time": 14.34316, "total_time": 14.335142 }, "output": [ "https://replicate.delivery/pbxt/Fe6CXvT5znwiT6EYQOQsc7jVjeuFINEBRXb2KSWtuO4gOM1RA/out-0.png" ], "started_at": "2023-11-05T03:52:50.819329Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/yd6z4elbb7s2ws2yc2s6azz2ju", "cancel": "https://api.replicate.com/v1/predictions/yd6z4elbb7s2ws2yc2s6azz2ju/cancel" }, "version": "0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1" }
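Whichever route you take, the response has the shape of the JSON above. A small helper to pull out the fields you typically need — the field names come from that response, but `summarize_prediction` itself is just an illustrative sketch:

```python
def summarize_prediction(prediction):
    """Extract id, status, timing, and output URLs from a prediction dict."""
    return {
        "id": prediction["id"],
        "status": prediction["status"],
        "predict_time_s": prediction.get("metrics", {}).get("predict_time"),
        "outputs": list(prediction.get("output") or []),
    }

# Trimmed-down version of the response shown above:
prediction = {
    "id": "yd6z4elbb7s2ws2yc2s6azz2ju",
    "status": "succeeded",
    "metrics": {"predict_time": 14.34316, "total_time": 14.335142},
    "output": ["https://replicate.delivery/pbxt/Fe6CXvT5znwiT6EYQOQsc7jVjeuFINEBRXb2KSWtuO4gOM1RA/out-0.png"],
}
summary = summarize_prediction(prediction)
print(summary["status"], summary["predict_time_s"])  # succeeded 14.34316
```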
Prediction
lucataco/ssd-lora-inference:0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1
ID: wxwhhy3bsm4uqwmhkpor5ijjbq
Status: Succeeded
Source: Web
Hardware: A40 (Large)
Input
{ "seed": 29244, "width": 1024, "height": 1024, "prompt": "A photo of TOK", "refine": "no_refiner", "lora_url": "https://replicate.delivery/pbxt/mH8Z4rxWy3LEJBlnOSpE80iGruPK3bgQerAZEJ5PKWOxDi6IA/trained_model.tar", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 25 }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run lucataco/ssd-lora-inference using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "lucataco/ssd-lora-inference:0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1",
  {
    input: {
      seed: 29244,
      width: 1024,
      height: 1024,
      prompt: "A photo of TOK",
      refine: "no_refiner",
      lora_url: "https://replicate.delivery/pbxt/mH8Z4rxWy3LEJBlnOSpE80iGruPK3bgQerAZEJ5PKWOxDi6IA/trained_model.tar",
      scheduler: "K_EULER",
      lora_scale: 0.6,
      num_outputs: 1,
      guidance_scale: 7.5,
      apply_watermark: true,
      high_noise_frac: 0.8,
      negative_prompt: "",
      prompt_strength: 0.8,
      num_inference_steps: 25
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk (fs.writeFile requires a callback, so use the
// promise-based API instead):
await fs.promises.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run lucataco/ssd-lora-inference using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lucataco/ssd-lora-inference:0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1",
    input={
        "seed": 29244,
        "width": 1024,
        "height": 1024,
        "prompt": "A photo of TOK",
        "refine": "no_refiner",
        "lora_url": "https://replicate.delivery/pbxt/mH8Z4rxWy3LEJBlnOSpE80iGruPK3bgQerAZEJ5PKWOxDi6IA/trained_model.tar",
        "scheduler": "K_EULER",
        "lora_scale": 0.6,
        "num_outputs": 1,
        "guidance_scale": 7.5,
        "apply_watermark": True,
        "high_noise_frac": 0.8,
        "negative_prompt": "",
        "prompt_strength": 0.8,
        "num_inference_steps": 25
    }
)

# To access the file URL:
print(output[0].url())
#=> "http://example.com"

# To write the file to disk:
with open("my-image.png", "wb") as file:
    file.write(output[0].read())
To learn more, take a look at the guide on getting started with Python.
Run lucataco/ssd-lora-inference using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "lucataco/ssd-lora-inference:0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1", "input": { "seed": 29244, "width": 1024, "height": 1024, "prompt": "A photo of TOK", "refine": "no_refiner", "lora_url": "https://replicate.delivery/pbxt/mH8Z4rxWy3LEJBlnOSpE80iGruPK3bgQerAZEJ5PKWOxDi6IA/trained_model.tar", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 25 } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
You can run this model locally using Cog. First, install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/lucataco/ssd-lora-inference@sha256:0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1 \
  -i 'seed=29244' \
  -i 'width=1024' \
  -i 'height=1024' \
  -i 'prompt="A photo of TOK"' \
  -i 'refine="no_refiner"' \
  -i 'lora_url="https://replicate.delivery/pbxt/mH8Z4rxWy3LEJBlnOSpE80iGruPK3bgQerAZEJ5PKWOxDi6IA/trained_model.tar"' \
  -i 'scheduler="K_EULER"' \
  -i 'lora_scale=0.6' \
  -i 'num_outputs=1' \
  -i 'guidance_scale=7.5' \
  -i 'apply_watermark=true' \
  -i 'high_noise_frac=0.8' \
  -i 'negative_prompt=""' \
  -i 'prompt_strength=0.8' \
  -i 'num_inference_steps=25'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/lucataco/ssd-lora-inference@sha256:0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{ "input": { "seed": 29244, "width": 1024, "height": 1024, "prompt": "A photo of TOK", "refine": "no_refiner", "lora_url": "https://replicate.delivery/pbxt/mH8Z4rxWy3LEJBlnOSpE80iGruPK3bgQerAZEJ5PKWOxDi6IA/trained_model.tar", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 25 } }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
Output
{ "completed_at": "2023-11-05T03:53:43.008176Z", "created_at": "2023-11-05T03:53:28.992624Z", "data_removed": false, "error": null, "id": "wxwhhy3bsm4uqwmhkpor5ijjbq", "input": { "seed": 29244, "width": 1024, "height": 1024, "prompt": "A photo of TOK", "refine": "no_refiner", "lora_url": "https://replicate.delivery/pbxt/mH8Z4rxWy3LEJBlnOSpE80iGruPK3bgQerAZEJ5PKWOxDi6IA/trained_model.tar", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 25 }, "logs": "LORA\nLoading ssd txt2img pipeline...\nLoading pipeline components...: 0%| | 0/7 [00:00<?, ?it/s]\nLoading pipeline components...: 14%|█▍ | 1/7 [00:00<00:01, 3.67it/s]\nLoading pipeline components...: 57%|█████▋ | 4/7 [00:00<00:00, 8.09it/s]\nLoading pipeline components...: 71%|███████▏ | 5/7 [00:00<00:00, 6.11it/s]\nLoading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 8.25it/s]\nLoading ssd lora weights...\nLoading fine-tuned model\nDoes not have Unet. 
Assume we are using LoRA\nLoading Unet LoRA\nUsing seed: 29244\nPrompt: A photo of <s0><s1>\ntxt2img mode\n 0%| | 0/25 [00:00<?, ?it/s]\n 4%|▍ | 1/25 [00:00<00:04, 6.00it/s]\n 8%|▊ | 2/25 [00:00<00:03, 6.01it/s]\n 12%|█▏ | 3/25 [00:00<00:03, 6.01it/s]\n 16%|█▌ | 4/25 [00:00<00:03, 6.01it/s]\n 20%|██ | 5/25 [00:00<00:03, 6.00it/s]\n 24%|██▍ | 6/25 [00:00<00:03, 6.00it/s]\n 28%|██▊ | 7/25 [00:01<00:02, 6.01it/s]\n 32%|███▏ | 8/25 [00:01<00:02, 6.02it/s]\n 36%|███▌ | 9/25 [00:01<00:02, 6.02it/s]\n 40%|████ | 10/25 [00:01<00:02, 6.02it/s]\n 44%|████▍ | 11/25 [00:01<00:02, 6.02it/s]\n 48%|████▊ | 12/25 [00:01<00:02, 6.02it/s]\n 52%|█████▏ | 13/25 [00:02<00:01, 6.02it/s]\n 56%|█████▌ | 14/25 [00:02<00:01, 6.02it/s]\n 60%|██████ | 15/25 [00:02<00:01, 6.02it/s]\n 64%|██████▍ | 16/25 [00:02<00:01, 6.01it/s]\n 68%|██████▊ | 17/25 [00:02<00:01, 6.01it/s]\n 72%|███████▏ | 18/25 [00:02<00:01, 6.01it/s]\n 76%|███████▌ | 19/25 [00:03<00:00, 6.01it/s]\n 80%|████████ | 20/25 [00:03<00:00, 6.01it/s]\n 84%|████████▍ | 21/25 [00:03<00:00, 6.01it/s]\n 88%|████████▊ | 22/25 [00:03<00:00, 6.01it/s]\n 92%|█████████▏| 23/25 [00:03<00:00, 6.01it/s]\n 96%|█████████▌| 24/25 [00:03<00:00, 6.02it/s]\n100%|██████████| 25/25 [00:04<00:00, 6.02it/s]\n100%|██████████| 25/25 [00:04<00:00, 6.01it/s]", "metrics": { "predict_time": 13.953547, "total_time": 14.015552 }, "output": [ "https://replicate.delivery/pbxt/5epZwt3FXPWUWCcvDfJXg5yGrhfGJxEUM1X59Lb1qowMewUHB/out-0.png" ], "started_at": "2023-11-05T03:53:29.054629Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/wxwhhy3bsm4uqwmhkpor5ijjbq", "cancel": "https://api.replicate.com/v1/predictions/wxwhhy3bsm4uqwmhkpor5ijjbq/cancel" }, "version": "0d087dce1ad5201881adca6837faa95957d551ec15e56c9f2eb8f46c348089d1" }