Chat with images
Vision models process and interpret visual information from images and videos. You can use them to answer questions about an image's content, identify and locate objects, and more.
Here's an example that uses the yorickvp/llava-13b vision model to generate recipe ideas from a photo of your fridge. You can run it from your JavaScript code like this:
```javascript
import Replicate from "replicate";

// The client reads your REPLICATE_API_TOKEN from the environment.
const replicate = new Replicate();

const output = await replicate.run(
  "yorickvp/llava-13b:01359160a4cff57c6b7d4dc625d0019d390c7c46f553714069f114b392f4a726",
  {
    input: {
      image: "https://replicate.delivery/pbxt/KZOUXoMy3OxnyOeIA0LNzhtWDjBZLm9T6IPm5lbKcFT8lybo/fridge.png",
      prompt: "Here's a photo of my fridge today. Please give me some simple recipe ideas based on its contents.",
    },
  }
);
console.log(output);
```
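Many Replicate language models, llava-13b among them, return their reply as an array of streamed text chunks rather than a single string (an assumption about this model's output shape, not a guarantee). A small helper can flatten either form into one string:

```javascript
// Sketch: join streamed array output into a single string, falling back to
// plain stringification for models that already return a string.
function flattenOutput(output) {
  return Array.isArray(output) ? output.join("") : String(output);
}

// Hypothetical streamed reply to the recipe prompt above:
const reply = flattenOutput(["Try a ", "frittata with ", "the eggs and spinach."]);
```

The same helper works unchanged on string output, so you don't need to know a model's output shape in advance.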
If you don’t need reasoning abilities and just want to get descriptions of images, check out our image captioning collection →
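The `image` input in the example above is a hosted URL. For small local files, one common pattern is to encode the bytes as a base64 data URI and pass that in place of the URL (the helper below is illustrative, not part of the Replicate client API):

```javascript
// Sketch: encode raw image bytes as a data URI that can be passed as the
// `image` input instead of a hosted URL. Suitable only for small files.
function toDataUri(bytes, mime = "image/png") {
  return `data:${mime};base64,${Buffer.from(bytes).toString("base64")}`;
}

// Example: the first four bytes of a PNG file (its magic-number prefix).
const image = toDataUri(Buffer.from([0x89, 0x50, 0x4e, 0x47]));
```

In a real script you would read the file first, e.g. `toDataUri(await readFile("fridge.png"))` with `readFile` from `node:fs/promises` (the filename here is hypothetical).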
Featured models

openai / gpt-4o-mini
Low latency, low cost version of OpenAI's GPT-4o model
Updated 1 month ago

anthropic / claude-4-sonnet
Claude Sonnet 4 is a significant upgrade to 3.7, delivering superior coding and reasoning while responding more precisely to your instructions
Updated 3 months ago

yorickvp / llava-13b
Visual instruction tuning towards large language and vision models with GPT-4 level capabilities
Updated 1 year, 2 months ago
Recommended models

openai / gpt-4.1-mini
Fast, affordable version of GPT-4.1
Updated 1 day, 7 hours ago

openai / gpt-4o
OpenAI's high-intelligence chat model
Updated 1 week ago

lucataco / qwen2.5-omni-7b
Qwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner.
Updated 5 months, 1 week ago

anthropic / claude-3.7-sonnet
The most intelligent Claude model and the first hybrid reasoning model on the market (claude-3-7-sonnet-20250219)
Updated 6 months, 3 weeks ago

anthropic / claude-3.5-sonnet
Anthropic's most intelligent language model to date, with a 200K token context window and image understanding (claude-3-5-sonnet-20241022)
Updated 7 months ago

lucataco / qwen2-vl-7b-instruct
Latest model in the Qwen family for chatting about videos and images
Updated 8 months, 4 weeks ago

lucataco / ollama-llama3.2-vision-90b
Ollama Llama 3.2 Vision 90B
Updated 9 months ago

lucataco / ollama-llama3.2-vision-11b
Ollama Llama 3.2 Vision 11B
Updated 9 months ago

lucataco / moondream2
moondream2 is a small vision language model designed to run efficiently on edge devices
Updated 1 year, 1 month ago

yorickvp / llava-v1.6-vicuna-13b
LLaVA v1.6: Large Language and Vision Assistant (Vicuna-13B)
Updated 1 year, 7 months ago

yorickvp / llava-v1.6-mistral-7b
LLaVA v1.6: Large Language and Vision Assistant (Mistral-7B)
Updated 1 year, 7 months ago

zsxkib / uform-gen
🖼️ Super fast 1.5B Image Captioning/VQA Multimodal LLM (Image-to-Text) 🖋️
Updated 1 year, 7 months ago

adirik / kosmos-g
Kosmos-G: Generating Images in Context with Multimodal Large Language Models
Updated 1 year, 9 months ago

lucataco / bakllava
BakLLaVA-1 is a Mistral 7B base augmented with the LLaVA 1.5 architecture
Updated 1 year, 10 months ago

lucataco / qwen-vl-chat
A multimodal LLM-based AI assistant, which is trained with alignment techniques. Qwen-VL-Chat supports more flexible interaction, such as multi-round question answering, and creative capabilities.
Updated 1 year, 11 months ago

adirik / owlvit-base-patch32
Zero-shot / open vocabulary object detection
Updated 1 year, 11 months ago