philz1337x/clip-interrogator

Faster! The CLIP Interrogator is a prompt engineering tool that combines OpenAI's CLIP and Salesforce's BLIP to optimize text prompts to match a given image. Use the resulting prompts with text-to-image models like Stable Diffusion to create cool art!

Public
132.4K runs

Run philz1337x/clip-interrogator with an API

Use one of our client libraries to get started quickly. Clicking a library takes you to the Playground tab, where you can tweak the inputs, see the results, and copy the corresponding code into your own project.
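For example, here is a minimal sketch using the Replicate Python client. The version id and image URL are placeholders rather than values from this page; copy the current version string from the model's API tab and set REPLICATE_API_TOKEN in your environment.

# Minimal sketch: assumes `pip install replicate` and REPLICATE_API_TOKEN is set.
# The version id is a placeholder -- use the one shown on the model's API tab.
import replicate

output = replicate.run(
    "philz1337x/clip-interrogator:<version-id>",
    input={
        "image": "https://example.com/photo.jpg",  # placeholder image URL
        "clip_model_name": "ViT-L-14/openai",      # default; see the input schema below
        "mode": "best",                            # slower but more thorough than "fast"
    },
)
print(output)  # the generated prompt, returned as a string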

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used. An example request body built from these fields is sketched after the table below.

Field            Type    Default value    Description
image            string                   Input image
clip_model_name  string  ViT-L-14/openai  Choose ViT-L for Stable Diffusion 1, and ViT-H for Stable Diffusion 2
mode             string  best             Prompt mode (best takes 10-20 seconds, fast takes 1-2 seconds).
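For example, a request body using these fields might look like the sketch below. The file path is a placeholder, and passing an open file handle for the image is an assumption about the client library rather than something stated on this page; a plain URL string also works.

# Illustrative input payload; pass it as `input=` to replicate.run as shown above.
inputs = {
    "image": open("my_photo.jpg", "rb"),   # placeholder path; assumes the client accepts file handles
    "clip_model_name": "ViT-L-14/openai",  # switch to the ViT-H variant for Stable Diffusion 2
    "mode": "fast",                        # trades some quality for a 1-2 second response
}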

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "string",
  "title": "Output"
}
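Because the output schema is a bare string rather than a list or object, the prediction's output is the prompt text itself. A minimal sketch of consuming it, under the same assumptions as the examples above:

# The result should come back as a single Python string, so no unwrapping is needed;
# it can be passed straight to a text-to-image model as its prompt.
prompt = replicate.run("philz1337x/clip-interrogator:<version-id>", input=inputs)
print(prompt)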