Run this model in Node.js with one line of code:
npm install replicate
First, set your REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run replicate/hello-context using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "replicate/hello-context:1c9f10493d86b936364c6e410e6d0ff67f0fd8accd204e760756ef64aeb4e937",
  { input: {} }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Run this model in Python. First, install Replicate's Python client library:

pip install replicate
import replicate
output = replicate.run(
    "replicate/hello-context:1c9f10493d86b936364c6e410e6d0ff67f0fd8accd204e760756ef64aeb4e937",
    input={}
)
print(output)
To learn more, take a look at the guide on getting started with Python.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "replicate/hello-context:1c9f10493d86b936364c6e410e6d0ff67f0fd8accd204e760756ef64aeb4e937",
    "input": {}
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
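The same request can be sketched with Python's standard library. This is an illustration, not part of Replicate's client; the build_request helper is a hypothetical name, and the sketch assumes REPLICATE_API_TOKEN is set in your environment.

```python
import json
import os
import urllib.request

API_URL = "https://api.replicate.com/v1/predictions"
VERSION = ("replicate/hello-context:"
           "1c9f10493d86b936364c6e410e6d0ff67f0fd8accd204e760756ef64aeb4e937")

def build_request(version, input_data):
    """Assemble the headers and JSON body for a prediction request.

    Hypothetical helper for illustration; mirrors the curl command above.
    """
    headers = {
        "Authorization": "Bearer " + os.environ.get("REPLICATE_API_TOKEN", ""),
        "Content-Type": "application/json",
        # "Prefer: wait" asks the API to hold the connection open until the
        # prediction finishes, instead of returning immediately.
        "Prefer": "wait",
    }
    body = json.dumps({"version": version, "input": input_data}).encode("utf-8")
    return headers, body

headers, body = build_request(VERSION, {})
req = urllib.request.Request(API_URL, data=body, headers=headers, method="POST")
# urllib.request.urlopen(req) would send it; that call is skipped here so the
# sketch runs without a network connection or a real token.
```

In practice the official client libraries handle authentication, retries, and output handling for you; this sketch only shows what goes over the wire.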
This model runs on CPU hardware. We don't yet have enough runs of this model to provide performance information.
This model doesn't have a readme.
This model is cold. You'll get a fast response if the model is warm and already running, and a slower response if the model is cold and starting up.
This model runs on CPU hardware, which costs $0.0001 per second.
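At that rate, cost scales linearly with run time. A back-of-envelope sketch (the estimate_cost helper is our own name, not part of Replicate's API):

```python
# Rough per-run cost for CPU hardware at $0.0001 per second (rate quoted above).
COST_PER_SECOND = 0.0001

def estimate_cost(run_seconds):
    """Estimate the price of a single run, in dollars."""
    return run_seconds * COST_PER_SECOND

print(estimate_cost(60))  # a one-minute run comes to roughly $0.006
```

Actual billing depends on measured run time, including any cold-start overhead while the model boots.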