prompt: Input prompt. Default: "An astronaut riding a rainbow unicorn"
negative_prompt: Input negative prompt. Default: ""
width: Width of output image. Default: 1024
height: Height of output image.
num_outputs: Number of images to output. Default: 1
scheduler: Noise scheduler; only applies if LCM is disabled. Default: "K_EULER"
num_inference_steps: Number of denoising steps. Default: 4
guidance_scale: Scale for classifier-free guidance. Default: 2
seed: Random seed. Leave blank to randomize the seed.
refine: Which refine style to use. Default: "no_refiner"
high_noise_frac: For expert_ensemble_refiner, the fraction of noise to use. Default: 0.8
refine_steps: For base_image_refiner, the number of steps to refine; defaults to num_inference_steps.
apply_watermark: Applies a watermark so downstream applications can determine whether an image is generated. If you have other provisions for generating or deploying images safely, you can use this to disable watermarking. Default: true
disable_safety_checker: Disable the safety checker for generated images. This feature is only available through the API; the safety checker can't be disabled when running on the website. See https://replicate.com/docs/how-does-replicate-work#safety. Default: false
lcm_scale: Scale for LCM; if 0, the DDIM scheduler is used.
style_scale: Scale for style LoRA.
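Taken together, these inputs form the JSON payload sent to the model. A minimal sketch in Python (the `build_input` helper is hypothetical, not part of any client library; the defaults are the ones documented above):

```python
def build_input(prompt, **overrides):
    """Return an input payload dict, starting from the documented defaults.

    Any keyword argument overrides the corresponding default.
    """
    payload = {
        "prompt": prompt,
        "negative_prompt": "",
        "width": 1024,
        "height": 1024,
        "num_outputs": 1,
        "scheduler": "K_EULER",
        "num_inference_steps": 4,
        "guidance_scale": 2,
        "refine": "no_refiner",
        "high_noise_frac": 0.8,
        "apply_watermark": True,
        "disable_safety_checker": False,
    }
    payload.update(overrides)
    return payload

# Example: same prompt as the examples below, but with more denoising steps.
inputs = build_input("frankenstein monster whippet", num_inference_steps=8)
```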
Run this model in Node.js with one line of code:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run anotherjesse/sdxl-lcm-testing using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "anotherjesse/sdxl-lcm-testing:7050701ac789f535cd507847777404a4afaaae02827e96b78392847e71173a03",
  {
    input: {
      width: 1024,
      height: 1024,
      prompt: "frankenstein monster whippet",
      refine: "no_refiner",
      lcm_scale: 1,
      scheduler: "K_EULER",
      num_outputs: 1,
      style_scale: 0.8,
      guidance_scale: 2,
      apply_watermark: true,
      high_noise_frac: 0.8,
      negative_prompt: "",
      num_inference_steps: 4
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
import replicate
output = replicate.run(
    "anotherjesse/sdxl-lcm-testing:7050701ac789f535cd507847777404a4afaaae02827e96b78392847e71173a03",
    input={
        "width": 1024,
        "height": 1024,
        "prompt": "frankenstein monster whippet",
        "refine": "no_refiner",
        "lcm_scale": 1,
        "scheduler": "K_EULER",
        "num_outputs": 1,
        "style_scale": 0.8,
        "guidance_scale": 2,
        "apply_watermark": True,
        "high_noise_frac": 0.8,
        "negative_prompt": "",
        "num_inference_steps": 4,
    },
)
print(output)
To learn more, take a look at the guide on getting started with Python.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "7050701ac789f535cd507847777404a4afaaae02827e96b78392847e71173a03",
    "input": {
      "width": 1024,
      "height": 1024,
      "prompt": "frankenstein monster whippet",
      "refine": "no_refiner",
      "lcm_scale": 1,
      "scheduler": "K_EULER",
      "num_outputs": 1,
      "style_scale": 0.8,
      "guidance_scale": 2,
      "apply_watermark": true,
      "high_noise_frac": 0.8,
      "negative_prompt": "",
      "num_inference_steps": 4
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/anotherjesse/sdxl-lcm-testing@sha256:7050701ac789f535cd507847777404a4afaaae02827e96b78392847e71173a03 \
  -i 'width=1024' \
  -i 'height=1024' \
  -i 'prompt="frankenstein monster whippet"' \
  -i 'refine="no_refiner"' \
  -i 'lcm_scale=1' \
  -i 'scheduler="K_EULER"' \
  -i 'num_outputs=1' \
  -i 'style_scale=0.8' \
  -i 'guidance_scale=2' \
  -i 'apply_watermark=true' \
  -i 'high_noise_frac=0.8' \
  -i 'negative_prompt=""' \
  -i 'num_inference_steps=4'
To learn more, take a look at the Cog documentation.
docker run -d -p 5000:5000 --gpus=all r8.im/anotherjesse/sdxl-lcm-testing@sha256:7050701ac789f535cd507847777404a4afaaae02827e96b78392847e71173a03
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "width": 1024,
      "height": 1024,
      "prompt": "frankenstein monster whippet",
      "refine": "no_refiner",
      "lcm_scale": 1,
      "scheduler": "K_EULER",
      "num_outputs": 1,
      "style_scale": 0.8,
      "guidance_scale": 2,
      "apply_watermark": true,
      "high_noise_frac": 0.8,
      "negative_prompt": "",
      "num_inference_steps": 4
    }
  }' \
  http://localhost:5000/predictions
{
  "completed_at": "2023-11-10T00:17:17.989526Z",
  "created_at": "2023-11-10T00:17:14.537987Z",
  "data_removed": false,
  "error": null,
  "id": "r5gvwflbxpiof2ri7aebxn7ohu",
  "input": {
    "width": 1024,
    "height": 1024,
    "prompt": "frankenstein monster whippet",
    "refine": "no_refiner",
    "num_outputs": 1,
    "guidance_scale": 2,
    "apply_watermark": true,
    "high_noise_frac": 0.8,
    "negative_prompt": "",
    "num_inference_steps": 4
  },
  "logs": "Using seed: 29847\n  0%|          | 0/4 [00:00<?, ?it/s]\n 25%|██▌       | 1/4 [00:00<00:01,  2.39it/s]\n 50%|█████     | 2/4 [00:00<00:00,  2.40it/s]\n 75%|███████▌  | 3/4 [00:01<00:00,  2.41it/s]\n100%|██████████| 4/4 [00:01<00:00,  2.41it/s]",
  "metrics": {
    "predict_time": 3.468524,
    "total_time": 3.451539
  },
  "output": [
    "https://replicate.delivery/pbxt/fjvdkTvNjERTJyYQINH87AAGclfgzx1k7W46iyy6RDMNiy2RA/out-0.png"
  ],
  "started_at": "2023-11-10T00:17:14.521002Z",
  "status": "succeeded",
  "urls": {
    "get": "https://api.replicate.com/v1/predictions/r5gvwflbxpiof2ri7aebxn7ohu",
    "cancel": "https://api.replicate.com/v1/predictions/r5gvwflbxpiof2ri7aebxn7ohu/cancel"
  },
  "version": "33df0b6c238991d17253389ce91235383307964209e7dd5b9ec8908cb918d5da"
}
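A client consuming a prediction response like the one above typically checks `error` and `status` before reading `output`, which is a list of image URLs. A minimal sketch (the `summarize` helper is illustrative, not part of the Replicate client library; the response below is abridged):

```python
import json

# An abridged prediction response, shaped like the example above.
response_text = """
{
  "status": "succeeded",
  "error": null,
  "output": ["https://replicate.delivery/pbxt/example/out-0.png"],
  "metrics": {"predict_time": 3.468524}
}
"""

def summarize(pred):
    """Raise on failure; otherwise return (status, list of output URLs)."""
    if pred.get("error"):
        raise RuntimeError(pred["error"])
    return pred["status"], pred.get("output") or []

status, images = summarize(json.loads(response_text))
```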
This example was created by a different version, anotherjesse/sdxl-lcm-testing:33df0b6c.
This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.
This model doesn't have a readme.
This model is cold. You'll get a fast response if the model is warm and already running, and a slower response if the model is cold and starting up.