fermatresearch/instant-paint:f081f007
To run this model in Node.js, first install Replicate's client library:
npm install replicate
Then set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run fermatresearch/instant-paint using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
"fermatresearch/instant-paint:f081f0078a406007f41cff740e4012678ca75163bd42d95f61c67fbafaeb1a9a",
{
input: {
prompt: "An astronaut riding a rainbow unicorn, cinematic, dramatic",
scheduler: "LCM",
lora_scale: 0.6,
num_outputs: 1,
batched_prompt: false,
guidance_scale: 2,
apply_watermark: true,
condition_scale: 0.5,
negative_prompt: "",
prompt_strength: 0.8,
num_inference_steps: 6
}
}
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
To run this model in Python, first install Replicate's client library:
pip install replicate
Then set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run fermatresearch/instant-paint using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
"fermatresearch/instant-paint:f081f0078a406007f41cff740e4012678ca75163bd42d95f61c67fbafaeb1a9a",
input={
"prompt": "An astronaut riding a rainbow unicorn, cinematic, dramatic",
"scheduler": "LCM",
"lora_scale": 0.6,
"num_outputs": 1,
"batched_prompt": False,
"guidance_scale": 2,
"apply_watermark": True,
"condition_scale": 0.5,
"negative_prompt": "",
"prompt_strength": 0.8,
"num_inference_steps": 6
}
)
print(output)
To learn more, take a look at the guide on getting started with Python.
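For an image model like this one, the output is typically a list of image URLs (newer versions of the Python client may return file-like objects instead). As a minimal sketch under that assumption, you can save the generated images to disk like this:

import urllib.request

# Assumes `output` is a list of image URLs, as returned for image models
# by the Replicate Python client; newer client versions may return
# file-like objects, in which case write their bytes to disk directly instead.
for i, url in enumerate(output):
    filename = f"output_{i}.png"
    urllib.request.urlretrieve(url, filename)
    print(f"Saved {filename}")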
To use the HTTP API directly, set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run fermatresearch/instant-paint using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "f081f0078a406007f41cff740e4012678ca75163bd42d95f61c67fbafaeb1a9a",
"input": {
"prompt": "An astronaut riding a rainbow unicorn, cinematic, dramatic",
"scheduler": "LCM",
"lora_scale": 0.6,
"num_outputs": 1,
"batched_prompt": false,
"guidance_scale": 2,
"apply_watermark": true,
"condition_scale": 0.5,
"negative_prompt": "",
"prompt_strength": 0.8,
"num_inference_steps": 6
}
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
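The same request can be issued from any HTTP client. Here is a minimal Python sketch of the call above using the requests library (an assumed dependency), with the Prefer: wait header so the call blocks until the prediction completes:

import os
import requests

# Mirrors the curl example above: create a prediction and wait for it to finish.
response = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers={
        "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
        "Content-Type": "application/json",
        "Prefer": "wait",
    },
    json={
        "version": "f081f0078a406007f41cff740e4012678ca75163bd42d95f61c67fbafaeb1a9a",
        "input": {
            "prompt": "An astronaut riding a rainbow unicorn, cinematic, dramatic",
            "scheduler": "LCM",
            "num_inference_steps": 6,
        },
    },
)
prediction = response.json()
print(prediction.get("output"))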
Install Cog with Homebrew:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run fermatresearch/instant-paint using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/fermatresearch/instant-paint@sha256:f081f0078a406007f41cff740e4012678ca75163bd42d95f61c67fbafaeb1a9a \
-i 'prompt="An astronaut riding a rainbow unicorn, cinematic, dramatic"' \
-i 'scheduler="LCM"' \
-i 'lora_scale=0.6' \
-i 'num_outputs=1' \
-i 'batched_prompt=false' \
-i 'guidance_scale=2' \
-i 'apply_watermark=true' \
-i 'condition_scale=0.5' \
-i 'negative_prompt=""' \
-i 'prompt_strength=0.8' \
-i 'num_inference_steps=6'
To learn more, take a look at the Cog documentation.
Pull and run fermatresearch/instant-paint using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/fermatresearch/instant-paint@sha256:f081f0078a406007f41cff740e4012678ca75163bd42d95f61c67fbafaeb1a9a
curl -s -X POST \
-H "Content-Type: application/json" \
-d $'{
"input": {
"prompt": "An astronaut riding a rainbow unicorn, cinematic, dramatic",
"scheduler": "LCM",
"lora_scale": 0.6,
"num_outputs": 1,
"batched_prompt": false,
"guidance_scale": 2,
"apply_watermark": true,
"condition_scale": 0.5,
"negative_prompt": "",
"prompt_strength": 0.8,
"num_inference_steps": 6
}
}' \
http://localhost:5000/predictions
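You can also hit the same local endpoint from code. A minimal Python sketch using the requests library (an assumed dependency), posting the same payload to the container started above:

import requests

# Assumes the container from the `docker run` command above is listening on port 5000.
response = requests.post(
    "http://localhost:5000/predictions",
    headers={"Content-Type": "application/json"},
    json={
        "input": {
            "prompt": "An astronaut riding a rainbow unicorn, cinematic, dramatic",
            "scheduler": "LCM",
            "num_inference_steps": 6,
        }
    },
)
print(response.json().get("output"))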
Each run of this model costs approximately $0.038.