llsean / clip-interrogator-image-analysis

(Updated 1 year, 6 months ago)

  • Public
  • 14 runs
  • L40S

Input

Install the Python client:
pip install replicate

Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
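The client reads the token from the environment, so it can help to confirm it is visible to your Python process before calling the API. A minimal check (the helper name is illustrative, not part of the Replicate client):

```python
import os

def require_token() -> str:
    """Return the Replicate API token, or raise a clear error if it is missing."""
    token = os.environ.get("REPLICATE_API_TOKEN")
    if not token:
        raise RuntimeError("Set REPLICATE_API_TOKEN before using the client")
    return token
```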

Import the client:
import replicate

Run llsean/clip-interrogator-image-analysis using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

# replicate.run starts a prediction and blocks until it completes
output = replicate.run(
    "llsean/clip-interrogator-image-analysis:1d78352129182842018d997191ef98abb36530e9ba41e28c804a07ac52919393",
    input={
        "width": 768,
        "height": 768,
        "prompt": "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k",
        "num_images": 1,
        "guidance_scale": 8,
        "archive_outputs": False,
        "prompt_strength": 0.8,
        "sizing_strategy": "width/height",
        "lcm_origin_steps": 50,
        "canny_low_threshold": 100,
        "num_inference_steps": 8,
        "canny_high_threshold": 200,
        "control_guidance_end": 1,
        "control_guidance_start": 0,
        "controlnet_conditioning_scale": 2
    }
)
print(output)
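The shape of `output` depends on this model's schema; many Replicate image models return a list of file URLs. Assuming that shape here (the helper, filenames, and `.png` extension are illustrative), the results could be saved locally like this:

```python
import os
import urllib.request

def save_outputs(output, out_dir="outputs"):
    """Download each output URL to a local file and return the paths.

    Assumes `output` is a list of URLs; check the model's schema for
    the actual output structure before relying on this.
    """
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, url in enumerate(output):
        # .png is an assumption; the model's schema defines the real format
        path = os.path.join(out_dir, f"output_{i}.png")
        with urllib.request.urlopen(url) as resp, open(path, "wb") as f:
            f.write(resp.read())
        paths.append(path)
    return paths
```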

To learn more, take a look at the guide on getting started with Python.

Output


Run time and cost

This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

This model doesn't have a readme.