Llava 13B


Llava 13B is a multimodal vision model that can understand images. Llava takes an image as input and answers questions about it. You can also fine-tune Llava on your own dataset.

With Replicate, you can run Llava in the cloud with one line of code.

Want to try out Llava without writing code? Check out our Llava model playground.

You can run Llava with our official JavaScript client:

npm install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<your-api-token>

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

Run yorickvp/llava-13b using Replicate’s API:

const output = await replicate.run(
  "yorickvp/llava-13b:a0fdc44e4f2e1f20f2bb4e27846899953ac8e66c5886c5878fa1d6b73ce009e5",
  {
    input: {
      "image": "https://replicate.delivery/pbxt/JfvBi04QfleIeJ3ASiBEMbJvhTQKWKLjKaajEbuhO1Y0wPHd/view.jpg",
      "top_p": 1,
      "prompt": "Are you allowed to swim here?",
      "max_tokens": 1024,
      "temperature": 0.2
    }
  }
);
console.log(output);

Note that Llava takes an image as input. You can provide either a URL or a base64-encoded data URI as the value of the image input.

To learn more, take a look at the guide on getting started with Node.js.

You can run Llava with our official Python client:

pip install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<your-api-token>

Run yorickvp/llava-13b using Replicate’s API:

import replicate

output = replicate.run(
    "yorickvp/llava-13b:a0fdc44e4f2e1f20f2bb4e27846899953ac8e66c5886c5878fa1d6b73ce009e5",
    input={
        "image": "https://replicate.delivery/pbxt/JfvBi04QfleIeJ3ASiBEMbJvhTQKWKLjKaajEbuhO1Y0wPHd/view.jpg",
        "top_p": 1,
        "prompt": "Are you allowed to swim here?",
        "max_tokens": 1024,
        "temperature": 0.2
    }
)
print(output)
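Llava streams its answer back as a sequence of text chunks rather than a single string, so you will usually want to join the chunks. A minimal sketch (using stand-in chunks so the snippet runs without an API call; the exact output shape is an assumption based on how Replicate's Python client returns language-model output):

```python
# Stand-in for the `output` returned by replicate.run() above;
# the real call yields the answer as a sequence of text chunks.
output = ["The sign in the image ", "suggests swimming ", "is not allowed."]

# Join the chunks to recover the full answer as one string.
answer = "".join(output)
print(answer)  # -> The sign in the image suggests swimming is not allowed.
```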

Note that Llava takes an image as input. You can provide either a URL or a base64-encoded data URI as the value of the image input.
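For example, to send a local image file instead of a URL, you can encode it as a base64 data URI using only the standard library (a sketch; the file path here is hypothetical):

```python
import base64
import mimetypes


def to_data_uri(path: str) -> str:
    """Encode a local image file as a base64 data URI."""
    mime, _ = mimetypes.guess_type(path)  # e.g. "image/jpeg" for .jpg
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime};base64,{encoded}"


# Pass the result as the "image" input instead of a URL, e.g.:
# output = replicate.run("yorickvp/llava-13b:...", input={"image": to_data_uri("view.jpg"), ...})
```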

To learn more, take a look at the guide on getting started with Python.

You can call the HTTP API directly with tools like cURL:

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<your-api-token>

Run yorickvp/llava-13b using Replicate’s API:

curl -s -X POST \
  -H "Authorization: Token $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d $'{
    "version": "e272157381e2a3bf12df3a8edd1f38d1dbd736bbb7437277c8b34175f8fce358",
    "input": {
      "image": "https://replicate.delivery/pbxt/JfvBi04QfleIeJ3ASiBEMbJvhTQKWKLjKaajEbuhO1Y0wPHd/view.jpg",
      "top_p": 1,
      "prompt": "Are you allowed to swim here?",
      "max_tokens": 1024,
      "temperature": 0.2
    }
  }' \
  https://api.replicate.com/v1/models/yorickvp/llava-13b/predictions

Note that Llava takes an image as input. You can provide either a URL or a base64-encoded data URI as the value of the image input.

To learn more, take a look at Replicate’s HTTP API reference docs.

You can also run Llava using other Replicate client libraries for Golang, Swift, Elixir, and others.