SDXL on an A40
Input prompt
Default: "An astronaut riding a rainbow unicorn"
Input Negative Prompt
Default: ""
Input image for img2img or inpaint mode
Input mask for inpaint mode. Black areas will be preserved, white areas will be inpainted (see the example sketch after this list of inputs).
Width of output image
Default: 1024
Height of output image
Default: 1024
Number of images to output.
Default: 1
Which scheduler to use
Default: "K_EULER"
Number of denoising steps
Default: 50
Scale for classifier-free guidance
Default: 7.5
Prompt strength when using img2img / inpaint. 1.0 corresponds to full destruction of information in the image
Default: 0.8
Random seed. Leave blank to randomize the seed
Which refine style to use
Default: "no_refiner"
For expert_ensemble_refiner, the fraction of noise to use
For base_image_refiner, the number of steps to refine, defaults to num_inference_steps
Applies a watermark so that downstream applications can determine whether an image was AI-generated. If you have other provisions for generating or deploying images safely, you can use this to disable watermarking.
Default: true
LoRA additive scale. Only applicable on trained models.
Default: 0.6
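The image, mask, seed, and refiner options described above are not included in the pre-filled run examples below. As a minimal sketch using Replicate's Python client (installed further down this page), the call below shows how they could be combined for an img2img / inpaint run. The input names "image", "mask", and "seed" and the file paths are assumptions inferred from the descriptions above rather than taken from the model's published schema; confirm them against the schema before use.

import replicate

# Hypothetical img2img / inpaint call. The "image", "mask", and "seed"
# input names are assumed from the descriptions above; check the model's
# schema to confirm them. File paths are placeholders.
output = replicate.run(
    "charlesmccarthy/sdxl:4c7300e6b45a6dacc2fbbeaa0d2e17624231fa02d998923e94a462423ed12ba5",
    input={
        "prompt": "An astronaut riding a rainbow unicorn",
        "image": open("source.png", "rb"),    # starting image for img2img / inpaint
        "mask": open("mask.png", "rb"),       # white areas are inpainted, black areas preserved
        "prompt_strength": 0.8,               # 1.0 fully replaces the source image content
        "seed": 1234,                         # fixed seed for reproducible results
        "refine": "expert_ensemble_refiner",  # or "base_image_refiner" / "no_refiner"
        "high_noise_frac": 0.8,               # fraction of noise for the expert_ensemble_refiner
    },
)
print(output)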
Run this model in Node.js with one line of code:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run charlesmccarthy/sdxl using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "charlesmccarthy/sdxl:4c7300e6b45a6dacc2fbbeaa0d2e17624231fa02d998923e94a462423ed12ba5",
  {
    input: {
      width: 1024,
      height: 1024,
      prompt: "An astronaut riding a rainbow unicorn",
      refine: "no_refiner",
      scheduler: "K_EULER",
      lora_scale: 0.6,
      num_outputs: 1,
      guidance_scale: 7.5,
      apply_watermark: true,
      high_noise_frac: 0.8,
      negative_prompt: "",
      prompt_strength: 0.8,
      num_inference_steps: 50
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:
pip install replicate
import replicate
output = replicate.run(
    "charlesmccarthy/sdxl:4c7300e6b45a6dacc2fbbeaa0d2e17624231fa02d998923e94a462423ed12ba5",
    input={
        "width": 1024,
        "height": 1024,
        "prompt": "An astronaut riding a rainbow unicorn",
        "refine": "no_refiner",
        "scheduler": "K_EULER",
        "lora_scale": 0.6,
        "num_outputs": 1,
        "guidance_scale": 7.5,
        "apply_watermark": True,
        "high_noise_frac": 0.8,
        "negative_prompt": "",
        "prompt_strength": 0.8,
        "num_inference_steps": 50
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
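The example above prints the raw output. As a rough sketch (not part of this page's original examples), the snippet below saves each generated image to disk. It hedges on the client's return type: older versions of the replicate Python package return plain URL strings, while newer versions return file-like output objects, so both cases are handled.

import urllib.request

import replicate

output = replicate.run(
    "charlesmccarthy/sdxl:4c7300e6b45a6dacc2fbbeaa0d2e17624231fa02d998923e94a462423ed12ba5",
    input={"prompt": "An astronaut riding a rainbow unicorn", "num_outputs": 2},
)

# Save each output image. Older replicate clients return URL strings;
# newer ones return file-like objects with a read() method, so handle both.
for i, item in enumerate(output):
    data = item.read() if hasattr(item, "read") else urllib.request.urlopen(item).read()
    with open(f"output_{i}.png", "wb") as f:
        f.write(data)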
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "4c7300e6b45a6dacc2fbbeaa0d2e17624231fa02d998923e94a462423ed12ba5",
    "input": {
      "width": 1024,
      "height": 1024,
      "prompt": "An astronaut riding a rainbow unicorn",
      "refine": "no_refiner",
      "scheduler": "K_EULER",
      "lora_scale": 0.6,
      "num_outputs": 1,
      "guidance_scale": 7.5,
      "apply_watermark": true,
      "high_noise_frac": 0.8,
      "negative_prompt": "",
      "prompt_strength": 0.8,
      "num_inference_steps": 50
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
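The cURL example above sends the Prefer: wait header, which blocks until the prediction completes (or the wait times out). For long-running predictions you can instead create the prediction and poll its status. The snippet below is a minimal sketch of that pattern using Python's requests library against Replicate's predictions endpoint; it is not an official client, and requests must be installed separately.

import os
import time

import requests

API = "https://api.replicate.com/v1/predictions"
headers = {
    "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}

# Create the prediction without Prefer: wait, then poll until it finishes.
prediction = requests.post(API, headers=headers, json={
    "version": "4c7300e6b45a6dacc2fbbeaa0d2e17624231fa02d998923e94a462423ed12ba5",
    "input": {"prompt": "An astronaut riding a rainbow unicorn"},
}).json()

while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(2)
    prediction = requests.get(f"{API}/{prediction['id']}", headers=headers).json()

print(prediction["status"], prediction.get("output"))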
This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.