lucataco/clip-interrogator

CLIP Interrogator (for faster inference)

  • Public
  • 96.6K runs

Run clip-interrogator with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
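For example, with the Python client library (a minimal sketch, not the page's official snippet: the version hash is a placeholder to be copied from the model page, the filename is made up, and the REPLICATE_API_TOKEN environment variable must be set first):

# pip install replicate
import replicate

# Placeholder version hash: copy the current one from the model page.
MODEL = "lucataco/clip-interrogator:<version-hash>"

output = replicate.run(
    MODEL,
    input={
        "image": open("turtle.jpg", "rb"),    # local file; a public URL string also works
        "clip_model_name": "ViT-L-14/openai",
        "mode": "best",
    },
)
print(output)  # the generated prompt, returned as a single string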

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

image
  Type: string
  Description: Input image

clip_model_name
  Type: string (enum)
  Default: ViT-L-14/openai
  Options: ViT-L-14/openai, ViT-H-14/laion2b_s32b_b79k, ViT-bigG-14/laion2b_s39b_b160k
  Description: Choose ViT-L for Stable Diffusion 1, ViT-H for Stable Diffusion 2, or ViT-bigG for Stable Diffusion XL.

mode
  Type: string (enum)
  Default: best
  Options: best, classic, fast, negative
  Description: Prompt mode (best takes 10-20 seconds, fast takes 1-2 seconds).
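Putting these fields together, an input payload might look like the sketch below (the image URL is hypothetical; ViT-H and fast mode are chosen here to suit a Stable Diffusion 2 workflow where quick turnaround matters):

# Hypothetical input payload matching the schema above.
input_payload = {
    "image": "https://example.com/turtle.jpg",        # required: the image to interrogate
    "clip_model_name": "ViT-H-14/laion2b_s32b_b79k",  # ViT-H pairs with Stable Diffusion 2
    "mode": "fast",                                   # roughly 1-2 seconds instead of 10-20
}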

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{"title": "Output", "type": "string"}
Example API response
"painting of a turtle swimming in the ocean with a blue sky in the background, illustrative art, turtle, michael angelo inspired, world-bearing turtle, highly detailed illustration.”, 4k artwork, realistic illustration, highly detailed digital painting, vibrant digital painting, [ 4 k digital art, 4k art, hypperrealistic illustration, high detail illustration, vibrant realistic"
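A response like the one above can also be fetched asynchronously through the predictions endpoint of the Python client. A minimal polling sketch, assuming a placeholder version hash and a hypothetical image URL:

import time
import replicate

prediction = replicate.predictions.create(
    version="<version-hash>",  # placeholder: use the model's current version hash
    input={"image": "https://example.com/turtle.jpg", "mode": "best"},
)

# Poll until the prediction reaches a terminal state.
while prediction.status not in ("succeeded", "failed", "canceled"):
    time.sleep(1)
    prediction = replicate.predictions.get(prediction.id)

if prediction.status == "succeeded":
    print(prediction.output)  # a single prompt string, as described by the output schema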