prompt: Text of what you want to generate.
negative prompt: Text of what you don't want to generate.
width: Width of the output image. Default: 1024
height: Height of the output image.
num_images: Number of output images. Default: 1
steps: Number of denoising steps. Default: 4
eta: Stochastic parameter that controls randomness. Default: 0
guidance_scale: Scale for classifier-free guidance.
seed: Random seed. Leave blank to randomize the seed.
clip_skip: Number of layers to skip in CLIP.
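Putting the parameters above together, a minimal sketch of a full input payload in Python, using the field names and defaults from the API examples on this page (the prompt text is just a placeholder):

```python
# Example input payload for jyoung105/slam, assembled from the
# documented parameters. Field names match the API examples below.
input_payload = {
    "prompt": "A man with hoodie on, illustration",  # what to generate
    "width": 1024,        # output width (default: 1024)
    "height": 1024,       # output height
    "num_images": 1,      # number of output images (default: 1)
    "steps": 4,           # denoising steps (default: 4)
    "eta": 0,             # stochasticity control (default: 0)
    "guidance_scale": 1,  # classifier-free guidance scale
    "seed": 1234,         # omit to randomize the seed
    "clip_skip": 0,       # CLIP layers to skip
}
```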
Run this model in Node.js with one line of code:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run jyoung105/slam using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "jyoung105/slam:1197108fbd7ca3fd9a52a718c6958b869adc666f24647081d594f0c00af078cc",
  {
    input: {
      eta: 0,
      seed: 1234,
      steps: 4,
      width: 1024,
      height: 1024,
      prompt: "A man with hoodie on, illustration",
      clip_skip: 0,
      num_images: 1,
      guidance_scale: 1
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk:
fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
import replicate
output = replicate.run(
    "jyoung105/slam:1197108fbd7ca3fd9a52a718c6958b869adc666f24647081d594f0c00af078cc",
    input={
        "eta": 0,
        "seed": 1234,
        "steps": 4,
        "width": 1024,
        "height": 1024,
        "prompt": "A man with hoodie on, illustration",
        "clip_skip": 0,
        "num_images": 1,
        "guidance_scale": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "jyoung105/slam:1197108fbd7ca3fd9a52a718c6958b869adc666f24647081d594f0c00af078cc",
    "input": {
      "eta": 0,
      "seed": 1234,
      "steps": 4,
      "width": 1024,
      "height": 1024,
      "prompt": "A man with hoodie on, illustration",
      "clip_skip": 0,
      "num_images": 1,
      "guidance_scale": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
{
  "completed_at": "2024-11-22T13:23:10.119012Z",
  "created_at": "2024-11-22T13:22:48.969000Z",
  "data_removed": false,
  "error": null,
  "id": "5wxhxp0w15rma0ckag59hwkxaw",
  "input": {
    "eta": 0,
    "seed": 1234,
    "steps": 4,
    "width": 1024,
    "height": 1024,
    "prompt": "A man with hoodie on, illustration",
    "clip_skip": 0,
    "num_images": 1,
    "guidance_scale": 1
  },
  "logs": "[Debug] DEVICE: cuda\n[Debug] DTYPE: torch.float16\nSetup completed in 0.00 seconds.\n[~] Generating images...\n[Debug] Prompt: A man with hoodie on, illustration, best quality, high detail, sharp focus\n[Debug] Seed: 1234\n 0%| | 0/4 [00:00<?, ?it/s]\n 25%|██▌ | 1/4 [00:00<00:00, 5.48it/s]\n 75%|███████▌ | 3/4 [00:00<00:00, 10.81it/s]\n100%|██████████| 4/4 [00:00<00:00, 10.89it/s]\nImage generation completed in 1.19 seconds.\n[~] GPU: NVIDIA L40S\n[~] Memory: 11.04 GiB / 44.99 GiB\n[~] Generation time: 1.19 seconds",
  "metrics": {
    "predict_time": 1.544502077,
    "total_time": 21.150012
  },
  "output": [
    "https://replicate.delivery/xezq/e9IdbIjQLm2ICC7wljoQDsagycArgYlZEnf8VGQyIzne9GnnA/out_0.png"
  ],
  "started_at": "2024-11-22T13:23:08.574510Z",
  "status": "succeeded",
  "urls": {
    "stream": "https://stream.replicate.com/v1/files/bcwr-n63jegwqr2ey5t45ijxaxovmh5ga76iyrl2x6ekwbvkfcalcynsa",
    "get": "https://api.replicate.com/v1/predictions/5wxhxp0w15rma0ckag59hwkxaw",
    "cancel": "https://api.replicate.com/v1/predictions/5wxhxp0w15rma0ckag59hwkxaw/cancel"
  },
  "version": "1197108fbd7ca3fd9a52a718c6958b869adc666f24647081d594f0c00af078cc"
}
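A prediction object like the one above can be inspected programmatically. A sketch in Python that checks the status, pulls out the output URL, and computes the queue/boot overhead from the timing metrics (field names taken from the JSON above; the literal below is a trimmed-down copy of that response):

```python
import json

# Trimmed-down version of the prediction JSON shown above.
prediction = json.loads("""
{
  "status": "succeeded",
  "error": null,
  "metrics": {"predict_time": 1.544502077, "total_time": 21.150012},
  "output": ["https://replicate.delivery/xezq/.../out_0.png"]
}
""")

if prediction["status"] == "succeeded" and prediction["error"] is None:
    image_url = prediction["output"][0]
    # Time spent queued or cold-booting rather than generating:
    overhead = prediction["metrics"]["total_time"] - prediction["metrics"]["predict_time"]
    print(image_url)
    print(f"overhead: {overhead:.1f}s")
```

Note how most of the wall-clock time in the example (about 19.6 of 21.2 seconds) was startup overhead rather than generation, consistent with the cold-start note further down this page.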
This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.
This model doesn't have a readme.
AuraFlow: Fully open-sourced flow-based text-to-image generation model
CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion
Improved Distribution Matching Distillation for Fast Image Synthesis
Flash Diffusion: Accelerating Any Conditional Diffusion Model for Few Steps Image Generation
Locality-enhanced Projector for Multimodal LLM
Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image Synthesis
A family of multimodal small language models
Free Lunch towards Style-Preserving in Text-to-Image Generation by InstantX team
Free Lunch towards Style-Preserving in Text-to-Image Generation by InstantX team, with ControlNet
Kolors: Effective Training of Diffusion Model for Photorealistic Text-to-Image Synthesis
Latent Consistency Models: Synthesizing High-Resolution Images with Few-step Inference
SDXL-Lightning: Progressive Adversarial Diffusion Distillation
Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis
Tiny vision language model
Phased Consistency Model
PeRFlow: Piecewise Rectified Flow as Universal Plug-and-Play Accelerator
Playground v2.0: A diffusion-based text-to-image generation model trained from scratch by the research team at Playground
Playground v2.5: Three Insights towards Enhancing Aesthetic Quality in Text-to-Image Generation
This model is cold. You'll get a fast response if the model is warm and already running, and a slower response if the model is cold and starting up.
This model runs on L40S hardware, which costs $0.000975 per second.
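At that rate, the cost of a single run can be estimated from the predict_time in the example response above (about 1.54 seconds of billed GPU time):

```python
PRICE_PER_SECOND = 0.000975  # L40S, USD per second (from this page)
predict_time = 1.544502077   # seconds, from the example prediction

cost = predict_time * PRICE_PER_SECOND
print(f"${cost:.6f}")  # roughly $0.0015 per run
```

Only predict_time is billed; the queue and cold-boot overhead included in total_time is not.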