lightweight-ai / model3_4
Public · 33.2K runs · Hardware: L40S

Prediction
lightweight-ai/model3_4:62e2de66ab225b2eb97ed11c740021a151a3b637ae8ecbce11714484403a2f9a
ID: k274gb1ryhrme0cmg9htvxac90
Status: Succeeded
Source: Web
Hardware: L40S

Input
- loras: []
- width: 1024
- height: 1024
- prompt: A bohemian-style female travel blogger with sun-kissed skin and messy beach waves
- inpaint: false
- scheduler: K_EULER
- lora_scales: []
- num_outputs: 1
- output_format: png
- guidance_scale: 3.5
- output_quality: 100
- negative_prompt: (empty)
- prompt_strength: 0.8
- num_inference_steps: 28
{ "loras": [], "width": 1024, "height": 1024, "prompt": "A bohemian-style female travel blogger with sun-kissed skin and messy beach waves", "inpaint": false, "scheduler": "K_EULER", "lora_scales": [], "num_outputs": 1, "output_format": "png", "guidance_scale": 3.5, "output_quality": 100, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 28 }
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run lightweight-ai/model3_4 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises"; // needed for the file write at the end

const output = await replicate.run(
  "lightweight-ai/model3_4:62e2de66ab225b2eb97ed11c740021a151a3b637ae8ecbce11714484403a2f9a",
  {
    input: {
      loras: [],
      width: 1024,
      height: 1024,
      prompt: "A bohemian-style female travel blogger with sun-kissed skin and messy beach waves",
      inpaint: false,
      scheduler: "K_EULER",
      lora_scales: [],
      num_outputs: 1,
      output_format: "png",
      guidance_scale: 3.5,
      output_quality: 100,
      negative_prompt: "",
      prompt_strength: 0.8,
      num_inference_steps: 28
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk:
await fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run lightweight-ai/model3_4 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lightweight-ai/model3_4:62e2de66ab225b2eb97ed11c740021a151a3b637ae8ecbce11714484403a2f9a",
    input={
        "loras": [],
        "width": 1024,
        "height": 1024,
        "prompt": "A bohemian-style female travel blogger with sun-kissed skin and messy beach waves",
        "inpaint": False,
        "scheduler": "K_EULER",
        "lora_scales": [],
        "num_outputs": 1,
        "output_format": "png",
        "guidance_scale": 3.5,
        "output_quality": 100,
        "negative_prompt": "",
        "prompt_strength": 0.8,
        "num_inference_steps": 28
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
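The snippet above only prints the result. Assuming `output` is a list whose first element is an image URL (as in the Output section below; newer client versions may return file objects instead), a minimal sketch for saving the image to disk:

import urllib.request

# Assumes `output` from the snippet above is a list whose first element is
# (or stringifies to) an image URL, matching the "output" array shown in the
# Output section of this page. Newer clients may return FileOutput objects.
image_url = str(output[0])
urllib.request.urlretrieve(image_url, "out-0.png")
print("saved out-0.png from", image_url)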
Run lightweight-ai/model3_4 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "62e2de66ab225b2eb97ed11c740021a151a3b637ae8ecbce11714484403a2f9a",
    "input": {
      "loras": [],
      "width": 1024,
      "height": 1024,
      "prompt": "A bohemian-style female travel blogger with sun-kissed skin and messy beach waves",
      "inpaint": false,
      "scheduler": "K_EULER",
      "lora_scales": [],
      "num_outputs": 1,
      "output_format": "png",
      "guidance_scale": 3.5,
      "output_quality": 100,
      "negative_prompt": "",
      "prompt_strength": 0.8,
      "num_inference_steps": 28
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
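The Prefer: wait header above asks the API to hold the connection until the prediction finishes. Without it, the creation response includes a urls.get endpoint (visible in the Output section below) that you can poll until the prediction reaches a terminal state. A minimal Python sketch using the requests package:

import os
import time
import requests

HEADERS = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

def wait_for_prediction(get_url: str, poll_seconds: float = 2.0) -> dict:
    """Poll a prediction's urls.get endpoint until it finishes.

    `get_url` is the "urls.get" field of the creation response, e.g.
    https://api.replicate.com/v1/predictions/k274gb1ryhrme0cmg9htvxac90
    """
    while True:
        prediction = requests.get(get_url, headers=HEADERS).json()
        if prediction["status"] in ("succeeded", "failed", "canceled"):
            return prediction
        time.sleep(poll_seconds)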
You can run this model locally using Cog. First, install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/lightweight-ai/model3_4@sha256:62e2de66ab225b2eb97ed11c740021a151a3b637ae8ecbce11714484403a2f9a \
  -i 'loras=[]' \
  -i 'width=1024' \
  -i 'height=1024' \
  -i 'prompt="A bohemian-style female travel blogger with sun-kissed skin and messy beach waves"' \
  -i 'inpaint=false' \
  -i 'scheduler="K_EULER"' \
  -i 'lora_scales=[]' \
  -i 'num_outputs=1' \
  -i 'output_format="png"' \
  -i 'guidance_scale=3.5' \
  -i 'output_quality=100' \
  -i 'negative_prompt=""' \
  -i 'prompt_strength=0.8' \
  -i 'num_inference_steps=28'
To learn more, take a look at the Cog documentation.
Alternatively, run the model as a local HTTP server with Docker and send prediction requests to it:
docker run -d -p 5000:5000 --gpus=all r8.im/lightweight-ai/model3_4@sha256:62e2de66ab225b2eb97ed11c740021a151a3b637ae8ecbce11714484403a2f9a
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "loras": [],
      "width": 1024,
      "height": 1024,
      "prompt": "A bohemian-style female travel blogger with sun-kissed skin and messy beach waves",
      "inpaint": false,
      "scheduler": "K_EULER",
      "lora_scales": [],
      "num_outputs": 1,
      "output_format": "png",
      "guidance_scale": 3.5,
      "output_quality": 100,
      "negative_prompt": "",
      "prompt_strength": 0.8,
      "num_inference_steps": 28
    }
  }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
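The same local endpoint can also be called from Python. A minimal sketch using the requests package, mirroring the curl command above:

import requests

# The Docker container started above serves a Cog prediction endpoint on port 5000.
response = requests.post(
    "http://localhost:5000/predictions",
    json={
        "input": {
            "loras": [],
            "width": 1024,
            "height": 1024,
            "prompt": "A bohemian-style female travel blogger with sun-kissed skin and messy beach waves",
            "inpaint": False,
            "scheduler": "K_EULER",
            "lora_scales": [],
            "num_outputs": 1,
            "output_format": "png",
            "guidance_scale": 3.5,
            "output_quality": 100,
            "negative_prompt": "",
            "prompt_strength": 0.8,
            "num_inference_steps": 28,
        }
    },
)
response.raise_for_status()
print(response.json()["output"])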
Output
{ "completed_at": "2025-01-20T06:32:11.677885Z", "created_at": "2025-01-20T06:25:28.308000Z", "data_removed": false, "error": null, "id": "k274gb1ryhrme0cmg9htvxac90", "input": { "loras": [], "width": 1024, "height": 1024, "prompt": "A bohemian-style female travel blogger with sun-kissed skin and messy beach waves", "inpaint": false, "scheduler": "K_EULER", "lora_scales": [], "num_outputs": 1, "output_format": "png", "guidance_scale": 3.5, "output_quality": 100, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 28 }, "logs": "Model base : https://sg-model-store.s3.ap-northeast-2.amazonaws.com/SDXL/base/sd_xl_base_1.0.safetensors\nUsing seed: 42889\nPrompt: A bohemian-style female travel blogger with sun-kissed skin and messy beach waves\n 0%| | 0/28 [00:00<?, ?it/s]\n 4%|▎ | 1/28 [00:00<00:08, 3.29it/s]\n 11%|█ | 3/28 [00:00<00:03, 6.97it/s]\n 14%|█▍ | 4/28 [00:00<00:03, 7.40it/s]\n 18%|█▊ | 5/28 [00:00<00:02, 7.69it/s]\n 21%|██▏ | 6/28 [00:00<00:02, 7.90it/s]\n 25%|██▌ | 7/28 [00:00<00:02, 8.04it/s]\n 29%|██▊ | 8/28 [00:01<00:02, 8.15it/s]\n 32%|███▏ | 9/28 [00:01<00:02, 8.22it/s]\n 36%|███▌ | 10/28 [00:01<00:02, 8.26it/s]\n 39%|███▉ | 11/28 [00:01<00:02, 8.29it/s]\n 43%|████▎ | 12/28 [00:01<00:01, 8.31it/s]\n 46%|████▋ | 13/28 [00:01<00:01, 8.33it/s]\n 50%|█████ | 14/28 [00:01<00:01, 8.33it/s]\n 54%|█████▎ | 15/28 [00:01<00:01, 8.33it/s]\n 57%|█████▋ | 16/28 [00:02<00:01, 8.33it/s]\n 61%|██████ | 17/28 [00:02<00:01, 8.33it/s]\n 64%|██████▍ | 18/28 [00:02<00:01, 8.33it/s]\n 68%|██████▊ | 19/28 [00:02<00:01, 8.33it/s]\n 71%|███████▏ | 20/28 [00:02<00:00, 8.34it/s]\n 75%|███████▌ | 21/28 [00:02<00:00, 8.33it/s]\n 79%|███████▊ | 22/28 [00:02<00:00, 8.33it/s]\n 82%|████████▏ | 23/28 [00:02<00:00, 8.34it/s]\n 86%|████████▌ | 24/28 [00:02<00:00, 8.34it/s]\n 89%|████████▉ | 25/28 [00:03<00:00, 8.34it/s]\n 93%|█████████▎| 26/28 [00:03<00:00, 8.33it/s]\n 96%|█████████▋| 27/28 [00:03<00:00, 8.32it/s]\n100%|██████████| 28/28 [00:03<00:00, 8.31it/s]\n100%|██████████| 28/28 [00:03<00:00, 8.06it/s]\nGPU 0: NVIDIA L40S\nMemory Usage: 12141.0MB / 46068.0MB\nGPU Utilization: 99.0%", "metrics": { "predict_time": 4.352283725, "total_time": 403.369885 }, "output": [ "https://replicate.delivery/xezq/qzIXsJcguS6QF13smtXEQPfA9pfY6Ne5D1pSW6J0Om9WfnbQB/out-0.png" ], "started_at": "2025-01-20T06:32:07.325601Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-zl7murcklvwabnhcndjxt3w4ggpbusdyjbsxbvpcfhpi3tlkflwa", "get": "https://api.replicate.com/v1/predictions/k274gb1ryhrme0cmg9htvxac90", "cancel": "https://api.replicate.com/v1/predictions/k274gb1ryhrme0cmg9htvxac90/cancel" }, "version": "62e2de66ab225b2eb97ed11c740021a151a3b637ae8ecbce11714484403a2f9a" }