ZoeDepth: Combining relative and metric depth
{ "image": "https://replicate.delivery/pbxt/IPzzqLRb2x6XwGUK28l7dNTFO9MzQG1WmY2sdapZ2tnEdmMF/123.png", "model_type": "ZoeD_N" }
npm install replicate
import Replicate from "replicate"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run cjwbw/zoedepth using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises";

const output = await replicate.run(
  "cjwbw/zoedepth:6375723d97400d3ac7b88e3022b738bf6f433ae165c4a2acd1955eaa6b8fcb62",
  {
    input: {
      image: "https://replicate.delivery/pbxt/IPzzqLRb2x6XwGUK28l7dNTFO9MzQG1WmY2sdapZ2tnEdmMF/123.png",
      model_type: "ZoeD_N"
    }
  }
);

// To access the file URL:
console.log(output.url());
//=> "http://example.com"

// To write the file to disk:
await fs.writeFile("my-image.png", output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
import replicate
output = replicate.run(
    "cjwbw/zoedepth:6375723d97400d3ac7b88e3022b738bf6f433ae165c4a2acd1955eaa6b8fcb62",
    input={
        "image": "https://replicate.delivery/pbxt/IPzzqLRb2x6XwGUK28l7dNTFO9MzQG1WmY2sdapZ2tnEdmMF/123.png",
        "model_type": "ZoeD_N"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
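The Python call above returns a URL to the rendered depth map. As a rough sketch of saving that output locally with only the standard library (the helper names here are illustrative, not part of the replicate client):

```python
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

def output_filename(url):
    """Derive a local filename from the last segment of the URL path."""
    name = os.path.basename(urlparse(url).path)
    return name or "output.png"  # fallback if the path has no filename

def save_output(url, dest_dir="."):
    """Download the prediction output to dest_dir (performs a network call)."""
    path = os.path.join(dest_dir, output_filename(url))
    urlretrieve(url, path)
    return path
```

For an output URL ending in /out.png, output_filename returns "out.png", so save_output(output) would write out.png into the current directory.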
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "cjwbw/zoedepth:6375723d97400d3ac7b88e3022b738bf6f433ae165c4a2acd1955eaa6b8fcb62",
    "input": {
      "image": "https://replicate.delivery/pbxt/IPzzqLRb2x6XwGUK28l7dNTFO9MzQG1WmY2sdapZ2tnEdmMF/123.png",
      "model_type": "ZoeD_N"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
{ "completed_at": "2023-03-05T00:01:22.263313Z", "created_at": "2023-03-05T00:01:13.904447Z", "data_removed": false, "error": null, "id": "d4rle6xixfdq5gt4kuhylilo6a", "input": { "image": "https://replicate.delivery/pbxt/IPzzqLRb2x6XwGUK28l7dNTFO9MzQG1WmY2sdapZ2tnEdmMF/123.png", "model_type": "ZoeD_N" }, "logs": null, "metrics": { "predict_time": 8.072233, "total_time": 8.358866 }, "output": "https://replicate.delivery/pbxt/Yiy3JvNLmMpkKZhuAamOPUjdFUYn5OIl0xPlu04aTfBpbMSIA/out.png", "started_at": "2023-03-05T00:01:14.191080Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/d4rle6xixfdq5gt4kuhylilo6a", "cancel": "https://api.replicate.com/v1/predictions/d4rle6xixfdq5gt4kuhylilo6a/cancel" }, "version": "6375723d97400d3ac7b88e3022b738bf6f433ae165c4a2acd1955eaa6b8fcb62" }