Talk to images with Llava 13B
Llava 13B is a multimodal vision model that can understand images. It takes images as input and answers questions about them.
With Replicate, you can run Llava in the cloud with one line of code.
Run Llava in our Playground
Want to try out Llava without writing code? Check out our Llava model playground.
Run Llava with JavaScript
You can run Llava with our official JavaScript client:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<your-api-token>
Import and set up the client:
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run yorickvp/llava-13b using Replicate’s API:
const output = await replicate.run(
  "yorickvp/llava-13b:e272157381e2a3bf12df3a8edd1f38d1dbd736bbb7437277c8b34175f8fce358",
  {
    input: {
      "image": "https://replicate.delivery/pbxt/JfvBi04QfleIeJ3ASiBEMbJvhTQKWKLjKaajEbuhO1Y0wPHd/view.jpg",
      "top_p": 1,
      "prompt": "Are you allowed to swim here?",
      "max_tokens": 1024,
      "temperature": 0.2
    }
  }
);
console.log(output);
Note that Llava takes an image as input. You can provide a URL or a base64-encoded data URI as the value for image.
To learn more, take a look at the guide on getting started with Node.js.
Run Llava with Python
You can run Llava with our official Python client:
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<your-api-token>
Run yorickvp/llava-13b using Replicate’s API:
import replicate
output = replicate.run(
    "yorickvp/llava-13b:e272157381e2a3bf12df3a8edd1f38d1dbd736bbb7437277c8b34175f8fce358",
    input={
        "image": "https://replicate.delivery/pbxt/JfvBi04QfleIeJ3ASiBEMbJvhTQKWKLjKaajEbuhO1Y0wPHd/view.jpg",
        "top_p": 1,
        "prompt": "Are you allowed to swim here?",
        "max_tokens": 1024,
        "temperature": 0.2
    }
)
print(output)
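Llava streams its response, so the Python client may return the output as a sequence of string chunks rather than one complete string. A minimal sketch of collecting the chunks into the full response (the list below is a stand-in for a real API result, which requires a token and network access):

```python
# Simulated chunks, standing in for the streamed output of replicate.run.
chunks = ["No, ", "swimming ", "is ", "not ", "allowed ", "here."]

# Join the chunks to recover the full response text.
response = "".join(chunks)
print(response)
```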
Note that Llava takes an image as input. You can provide a URL or a base64-encoded data URI as the value for image.
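If your image lives on disk rather than at a URL, one option is to pass it as a base64-encoded data URI. A sketch of building one (the bytes here are a placeholder; in practice you would read them from a file):

```python
import base64

def to_data_uri(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a base64 data URI."""
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{encoded}"

# Placeholder bytes; in practice:
#   with open("view.jpg", "rb") as f:
#       uri = to_data_uri(f.read())
uri = to_data_uri(b"\xff\xd8\xff\xe0fake-jpeg-bytes")
```

The resulting string can be used directly as the "image" input value in place of a URL.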
To learn more, take a look at the guide on getting started with Python.
Run Llava with cURL
You can call the HTTP API directly with tools like cURL:
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<your-api-token>
Run yorickvp/llava-13b using Replicate’s API:
curl -s -X POST \
  -H "Authorization: Token $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d $'{
    "version": "e272157381e2a3bf12df3a8edd1f38d1dbd736bbb7437277c8b34175f8fce358",
    "input": {
      "image": "https://replicate.delivery/pbxt/JfvBi04QfleIeJ3ASiBEMbJvhTQKWKLjKaajEbuhO1Y0wPHd/view.jpg",
      "top_p": 1,
      "prompt": "Are you allowed to swim here?",
      "max_tokens": 1024,
      "temperature": 0.2
    }
  }' \
  https://api.replicate.com/v1/predictions
Note that Llava takes an image as input. You can provide a URL or a base64-encoded data URI as the value for image.
To learn more, take a look at Replicate’s HTTP API reference docs.
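The same HTTP request can be issued from any language. A sketch in Python that assembles the request body and headers used by the cURL example (actually sending it requires a valid token and network access, so this stops short of the POST):

```python
import json
import os

# Same payload as the cURL example above.
payload = {
    "version": "e272157381e2a3bf12df3a8edd1f38d1dbd736bbb7437277c8b34175f8fce358",
    "input": {
        "image": "https://replicate.delivery/pbxt/JfvBi04QfleIeJ3ASiBEMbJvhTQKWKLjKaajEbuhO1Y0wPHd/view.jpg",
        "top_p": 1,
        "prompt": "Are you allowed to swim here?",
        "max_tokens": 1024,
        "temperature": 0.2,
    },
}

headers = {
    "Authorization": f"Token {os.environ.get('REPLICATE_API_TOKEN', '')}",
    "Content-Type": "application/json",
}

body = json.dumps(payload)
# POST body to https://api.replicate.com/v1/predictions with these headers,
# then poll the prediction until its "status" field reports completion.
```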
You can also run Llava using Replicate's other client libraries, including those for Golang, Swift, and Elixir.