Inputs

DiT_resolution: output image size. Default: "256x256"
VAE_Decoder: decoder type. Default: "sd-vae-ft-mse"
num_sampling_steps: number of denoising steps. Default: 250
cfg_scale: classifier-free guidance scale. Default: 4
class_name: which ImageNet class to generate. Default: "centipede"
num_outputs: how many outputs to generate. Default: 4
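The defaults above can be collected into an input payload programmatically. A minimal sketch, assuming you want to override a subset of inputs and catch typos early; the `build_input` helper is illustrative and not part of the Replicate client:

```python
# Illustrative helper: build the input payload for this model,
# starting from the documented defaults and overriding as needed.
DEFAULT_INPUT = {
    "cfg_scale": 4,
    "class_name": "centipede",
    "VAE_Decoder": "sd-vae-ft-mse",
    "num_outputs": 4,
    "DiT_resolution": "256x256",
    "num_sampling_steps": 250,
}

def build_input(**overrides):
    """Merge user overrides onto the defaults, rejecting unknown keys."""
    unknown = set(overrides) - set(DEFAULT_INPUT)
    if unknown:
        raise ValueError(f"unknown input(s): {sorted(unknown)}")
    return {**DEFAULT_INPUT, **overrides}
```

For example, `build_input(class_name="goldfish", num_outputs=1)` keeps the remaining defaults intact.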
Run this model in Node.js. First, install the client library:

npm install replicate

Set your REPLICATE_API_TOKEN environment variable (find your API token in your account settings):

export REPLICATE_API_TOKEN=<paste-your-token-here>

Then import and instantiate the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run arielreplicate/scalable_diffusion_with_transformers using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "arielreplicate/scalable_diffusion_with_transformers:089c3506e8fb6cdc7f7b0165cac893f1ae03e044fbff88ef7164d9f0b0dead85",
  {
    input: {
      cfg_scale: 4,
      class_name: "centipede",
      VAE_Decoder: "sd-vae-ft-mse",
      num_outputs: 4,
      DiT_resolution: "256x256",
      num_sampling_steps: 250
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Run this model in Python. First, install the client library:

pip install replicate

Then import it:

import replicate
output = replicate.run(
    "arielreplicate/scalable_diffusion_with_transformers:089c3506e8fb6cdc7f7b0165cac893f1ae03e044fbff88ef7164d9f0b0dead85",
    input={
        "cfg_scale": 4,
        "class_name": "centipede",
        "VAE_Decoder": "sd-vae-ft-mse",
        "num_outputs": 4,
        "DiT_resolution": "256x256",
        "num_sampling_steps": 250
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
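Depending on the client version, replicate.run typically returns the model's output as a list of file URLs. A hedged sketch for saving such URL outputs to disk; the `save_outputs` helper, its `fetch` parameter, and the output filenames are illustrative assumptions, not part of the replicate library:

```python
import os
import urllib.request

def save_outputs(outputs, out_dir=".", fetch=urllib.request.urlopen):
    """Download each output URL into out_dir as output_<i>.png.

    The fetch callable is injectable so the logic can be exercised
    without network access; it must return a readable file-like object.
    """
    paths = []
    for i, url in enumerate(outputs):
        path = os.path.join(out_dir, f"output_{i}.png")
        with fetch(url) as resp, open(path, "wb") as f:
            f.write(resp.read())
        paths.append(path)
    return paths
```

For example, `save_outputs(output, "results")` after the `replicate.run` call above would write one PNG per generated image, assuming the output is a list of URLs.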
Run the model directly with Replicate's HTTP API:

curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "089c3506e8fb6cdc7f7b0165cac893f1ae03e044fbff88ef7164d9f0b0dead85",
    "input": {
      "cfg_scale": 4,
      "class_name": "centipede",
      "VAE_Decoder": "sd-vae-ft-mse",
      "num_outputs": 4,
      "DiT_resolution": "256x256",
      "num_sampling_steps": 250
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Install Cog with Homebrew:

brew install cog

If you don't have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/arielreplicate/scalable_diffusion_with_transformers@sha256:089c3506e8fb6cdc7f7b0165cac893f1ae03e044fbff88ef7164d9f0b0dead85 \
  -i 'cfg_scale=4' \
  -i 'class_name="centipede"' \
  -i 'VAE_Decoder="sd-vae-ft-mse"' \
  -i 'num_outputs=4' \
  -i 'DiT_resolution="256x256"' \
  -i 'num_sampling_steps=250'
To learn more, take a look at the Cog documentation.
Alternatively, run the model as an HTTP server with Docker (requires a GPU):

docker run -d -p 5000:5000 --gpus=all r8.im/arielreplicate/scalable_diffusion_with_transformers@sha256:089c3506e8fb6cdc7f7b0165cac893f1ae03e044fbff88ef7164d9f0b0dead85
Then send it a prediction request:

curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "cfg_scale": 4,
      "class_name": "centipede",
      "VAE_Decoder": "sd-vae-ft-mse",
      "num_outputs": 4,
      "DiT_resolution": "256x256",
      "num_sampling_steps": 250
    }
  }' \
  http://localhost:5000/predictions
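When the model runs locally like this, file outputs in the JSON response are commonly returned as base64 data URIs rather than hosted URLs. A sketch for decoding one to raw bytes; the exact response shape is an assumption, so inspect your actual JSON before relying on it:

```python
import base64

def data_uri_to_bytes(uri):
    """Decode a 'data:<mime>;base64,<payload>' URI into raw bytes."""
    header, _, payload = uri.partition(",")
    if not header.startswith("data:") or not header.endswith(";base64"):
        raise ValueError("not a base64 data URI")
    return base64.b64decode(payload)
```

Assuming the response's "output" field is a list of data URIs, each image could then be written out with something like `open(f"out_{i}.png", "wb").write(data_uri_to_bytes(uri))`.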