Llava 13B is a multimodal vision model that can understand images. It takes images as input and answers questions about them.
With Replicate, you can run Llava in the cloud with one line of code.
Want to try out Llava without writing code? Check out our Llava model playground.
You can run Llava with our official JavaScript client:
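If you haven't already, install the client from npm (this assumes the replicate package used in Replicate's Node.js guide):

```bash
npm install replicate
```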
Set the REPLICATE_API_TOKEN environment variable:
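For example, in your shell (replace the placeholder with your own token):

```bash
export REPLICATE_API_TOKEN=<paste-your-token-here>
```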
Import and set up the client:
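A minimal setup sketch, assuming the replicate npm package in an ESM project:

```javascript
import Replicate from "replicate";

// Pass the API token explicitly (the client can also pick it up from the environment).
const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
```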
Run yorickvp/llava-13b using Replicate’s API:
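For example (the version hash, image URL, and prompt below are placeholders; check the model page for the current version and full input schema):

```javascript
// Illustrative call; requires top-level await (ESM) or an async wrapper.
const output = await replicate.run(
  "yorickvp/llava-13b:<version>",
  {
    input: {
      image: "https://example.com/photo.jpg",
      prompt: "What is happening in this image?",
    },
  }
);
console.log(output);
```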
Note that Llava takes an image as input. You can provide either a URL or a base64-encoded string as the value for image.
To learn more, take a look at the guide on getting started with Node.js.
You can run Llava with our official Python client:
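If you haven't already, install the client (this assumes the replicate package on PyPI used in Replicate's Python guide):

```bash
pip install replicate
```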
Set the REPLICATE_API_TOKEN environment variable:
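For example, in your shell (replace the placeholder with your own token):

```bash
export REPLICATE_API_TOKEN=<paste-your-token-here>
```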
Run yorickvp/llava-13b using Replicate’s API:
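For example (the version hash, image URL, and prompt below are placeholders; check the model page for the current version and full input schema):

```python
import replicate

# Illustrative call; swap in a real version hash and your own image.
output = replicate.run(
    "yorickvp/llava-13b:<version>",
    input={
        "image": "https://example.com/photo.jpg",
        "prompt": "What is happening in this image?",
    },
)

# The model typically returns its answer as a sequence of text chunks; join them for the full string.
print("".join(output))
```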
Note that Llava takes an image as input. You can provide either a URL or a base64-encoded string as the value for image.
To learn more, take a look at the guide on getting started with Python.
You can call the HTTP API directly with tools like cURL:
Set the REPLICATE_API_TOKEN environment variable:
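For example, in your shell (replace the placeholder with your own token):

```bash
export REPLICATE_API_TOKEN=<paste-your-token-here>
```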
Run yorickvp/llava-13b using Replicate’s API:
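A sketch of the request (the version ID and input values are placeholders; see the model page and the HTTP API reference for the exact schema):

```bash
# Illustrative request: creates a prediction; the response includes URLs you can poll for status and output.
curl -s -X POST https://api.replicate.com/v1/predictions \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "version": "<model-version-id>",
    "input": {
      "image": "https://example.com/photo.jpg",
      "prompt": "What is happening in this image?"
    }
  }'
```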
Note that Llava takes an image as input. You can provide either a URL or a base64-encoded string as the value for image.
To learn more, take a look at Replicate’s HTTP API reference docs.
You can also run Llava using other Replicate client libraries for Golang, Swift, Elixir, and others.