You're looking at a specific version of this model.
visoar/product-photo:f100c24b
Input
Run this model in Node.js with one line of code:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run visoar/product-photo using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
"visoar/product-photo:f100c24bfef70884b45018b98c939b7ca4cf658b3256d5b7f9dcd78d41fe13d5",
{
input: {
pixel: "512 * 512",
scale: 3,
image_num: 1,
manual_seed: -1,
product_size: "0.6 * width",
guidance_scale: 7.5,
negative_prompt: "low quality, out of frame, illustration, 3d, sepia, painting, cartoons, sketch, watermark, text, Logo, advertisement",
num_inference_steps: 20
}
}
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run visoar/product-photo using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
"visoar/product-photo:f100c24bfef70884b45018b98c939b7ca4cf658b3256d5b7f9dcd78d41fe13d5",
input={
"pixel": "512 * 512",
"scale": 3,
"image_num": 1,
"manual_seed": -1,
"product_size": "0.6 * width",
"guidance_scale": 7.5,
"negative_prompt": "low quality, out of frame, illustration, 3d, sepia, painting, cartoons, sketch, watermark, text, Logo, advertisement",
"num_inference_steps": 20
}
)
print(output)
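The model returns one result per requested image (image_num). A minimal follow-up sketch for saving them locally, assuming the output is a list of plain image URLs (older versions of the Python client return URL strings; newer versions may return file-like objects instead), with an illustrative file name pattern:
import urllib.request

for i, item in enumerate(output):
    # Each entry is assumed to be the URL of one generated image.
    urllib.request.urlretrieve(str(item), f"product_photo_{i}.png")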
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run visoar/product-photo using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "f100c24bfef70884b45018b98c939b7ca4cf658b3256d5b7f9dcd78d41fe13d5",
"input": {
"pixel": "512 * 512",
"scale": 3,
"image_num": 1,
"manual_seed": -1,
"product_size": "0.6 * width",
"guidance_scale": 7.5,
"negative_prompt": "low quality, out of frame, illustration, 3d, sepia, painting, cartoons, sketch, watermark, text, Logo, advertisement",
"num_inference_steps": 20
}
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
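If you would rather drive the HTTP API from a script, the request above translates fairly directly. Here is a hedged sketch in Python using the requests package; it omits the Prefer: wait header and instead polls the prediction's get URL until it reaches a terminal state:
import os
import time
import requests

headers = {
    "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}

# Create the prediction with the same payload as the cURL example.
prediction = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers=headers,
    json={
        "version": "f100c24bfef70884b45018b98c939b7ca4cf658b3256d5b7f9dcd78d41fe13d5",
        "input": {
            "pixel": "512 * 512",
            "scale": 3,
            "image_num": 1,
            "manual_seed": -1,
            "product_size": "0.6 * width",
            "guidance_scale": 7.5,
            "negative_prompt": "low quality, out of frame, illustration, 3d, sepia, painting, cartoons, sketch, watermark, text, Logo, advertisement",
            "num_inference_steps": 20,
        },
    },
).json()

# Poll until the prediction succeeds, fails, or is canceled.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(2)
    prediction = requests.get(prediction["urls"]["get"], headers=headers).json()

print(prediction["output"])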
Install Cog:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run visoar/product-photo using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/visoar/product-photo@sha256:f100c24bfef70884b45018b98c939b7ca4cf658b3256d5b7f9dcd78d41fe13d5 \
-i 'pixel="512 * 512"' \
-i 'scale=3' \
-i 'image_num=1' \
-i 'manual_seed=-1' \
-i 'product_size="0.6 * width"' \
-i 'guidance_scale=7.5' \
-i 'negative_prompt="low quality, out of frame, illustration, 3d, sepia, painting, cartoons, sketch, watermark, text, Logo, advertisement"' \
-i 'num_inference_steps=20'
To learn more, take a look at the Cog documentation.
Pull and run visoar/product-photo using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/visoar/product-photo@sha256:f100c24bfef70884b45018b98c939b7ca4cf658b3256d5b7f9dcd78d41fe13d5
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "pixel": "512 * 512",
      "scale": 3,
      "image_num": 1,
      "manual_seed": -1,
      "product_size": "0.6 * width",
      "guidance_scale": 7.5,
      "negative_prompt": "low quality, out of frame, illustration, 3d, sepia, painting, cartoons, sketch, watermark, text, Logo, advertisement",
      "num_inference_steps": 20
    }
  }' \
  http://localhost:5000/predictions
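You can also call the local endpoint from a script. A small sketch in Python using requests, assuming the container started by the docker run command above is listening on port 5000 (Cog's HTTP server typically returns generated files inline, e.g. as base64 data URIs):
import requests

# Same payload as the local cURL request above.
result = requests.post(
    "http://localhost:5000/predictions",
    json={
        "input": {
            "pixel": "512 * 512",
            "scale": 3,
            "image_num": 1,
            "manual_seed": -1,
            "product_size": "0.6 * width",
            "guidance_scale": 7.5,
            "negative_prompt": "low quality, out of frame, illustration, 3d, sepia, painting, cartoons, sketch, watermark, text, Logo, advertisement",
            "num_inference_steps": 20,
        }
    },
).json()

print(result["status"])  # e.g. "succeeded"
print(result["output"])  # the generated image(s), typically as data URIs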
Add a payment method to run this model.
Each run costs approximately $0.033. Alternatively, try out our featured models for free.