laion-ai / conditioned-prior

Generate a CLIP image embedding from text.

  • Public
  • 372 runs

Run time and cost

This model costs approximately $0.028 to run on Replicate, or about 35 runs per $1, though the exact cost varies with your inputs. It is also open source, so you can run it on your own computer with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 125 seconds, though predict time varies significantly with the inputs.

Readme

Conditioned Prior (WIP)

Note: this image is likely to change over the coming days. If you choose to use it via API, be sure to use a pinned SHA.

Weights and code by @nousr

Predict a CLIP image embedding from its text embedding using a diffusion prior.

This code is part of an effort to replicate the models laid out in Hierarchical Text-Conditional Image Generation with CLIP Latents.
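As laid out in that paper, text-conditional generation factors through the CLIP image embedding; this model implements the prior term. A sketch in the paper's notation, where y is the caption, z_i the CLIP image embedding, and x the image:

```latex
% unCLIP factorization: the prior produces z_i from y,
% and a separate decoder produces x from z_i (and optionally y).
P(x \mid y) = P(x \mid z_i, y)\, P(z_i \mid y)
```

This repository covers only the prior P(z_i | y); pairing it with a decoder is up to the user.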

Requirements

You'll need Docker (with NVIDIA GPU support) and Cog installed to build and run the model locally.

Quick start

cog predict r8.im/laion-ai/conditioned-prior \
    -i prompt="..." \
    -i candidates=2 \
    -i cond_scale=1.0

Intended use

Anytime you need a CLIP image embed but only have a text description. For instance:

  • Use as input to models that accept CLIP image embeds, such as CLIP-guided VQGAN or diffusion models, to improve generations.

  • Use to improve performance on lookup tasks.
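For the lookup case, a predicted embedding can be compared against a gallery of CLIP image embeddings by cosine similarity. A minimal sketch with NumPy; the random vectors here are stand-ins for real embeddings, and the 768 dimensionality is an assumption that depends on the CLIP variant:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: in practice, `predicted` comes from this model and
# `gallery` from running CLIP's image encoder over your images.
predicted = rng.normal(size=768)      # embedding predicted from text
gallery = rng.normal(size=(5, 768))   # image embeddings to search over

def cosine_sim(vec, mat):
    """Cosine similarity between a vector and each row of a matrix."""
    vec = vec / np.linalg.norm(vec)
    mat = mat / np.linalg.norm(mat, axis=-1, keepdims=True)
    return mat @ vec

scores = cosine_sim(predicted, gallery)
ranking = np.argsort(-scores)  # indices of gallery images, best match first
print(ranking)
```

Ranking by cosine similarity matches how CLIP itself scores image–text pairs, so the prior's output slots directly into existing CLIP retrieval pipelines.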

Special Thanks

  • LAION for support, resources, and community

  • Stability AI for compute which makes these models possible

  • lucidrains for spearheading the open-source replication of DALLE 2

Caveats and recommendations

Just to avoid any confusion: this research is a recreation of (one part of) OpenAI's DALLE 2 paper. It is not "DALLE 2", the product/service from OpenAI you may have seen on the web.

Contribute

git clone https://github.com/laion-ai/conditioned-prior.git && cd conditioned-prior

Build the Docker image from scratch

Then, run:

cog build -t "my-custom-conditioned-prior"

Local prediction flask endpoint

docker run -d -p 5000:5000 --gpus=all 'my-custom-conditioned-prior'

A POST to the /predictions route will now trigger the model to run. Weights are loaded into GPU memory only once, when the container starts, so repeated API calls are faster.

curl http://localhost:5000/predictions -X POST -H "Content-Type: application/json" \
  -d '{"input": {
    "prompt": "...",
    "candidates": "2",
    "cond_scale": "1.0"
  }}'
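The same request can be made from Python. A minimal stdlib-only sketch; the function names and the example prompt are illustrative, and the shape of the JSON response is not documented here, so inspect it rather than relying on particular field names:

```python
import json
import urllib.request

# Local Cog server started by the `docker run` command above.
ENDPOINT = "http://localhost:5000/predictions"

def build_payload(prompt, candidates=2, cond_scale=1.0):
    """Assemble the JSON body that the /predictions route expects."""
    return {"input": {
        "prompt": prompt,
        "candidates": candidates,
        "cond_scale": cond_scale,
    }}

def predict(prompt, **kwargs):
    """POST a prediction request and return the parsed JSON response."""
    data = json.dumps(build_payload(prompt, **kwargs)).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Building the payload needs no server; calling predict() requires
# the container from the previous step to be running.
payload = build_payload("an oil painting of a lighthouse at dusk")
print(json.dumps(payload))
```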

Push a fork to your own Replicate account

First, edit the image property in cog.yaml:

# ...
image: "" # TODO put your own url here after creating a model on Replicate.
build:
  gpu: true
  python_version: "3.8"
# ...

If you need to change the Replicate demo uploaded to replicate.com/laion-ai/conditioned-prior, you will need to be invited to be part of the laion-ai org on Replicate. Reach out to @afiaka87, @robvanvolt, @christophschuhmann, or @rom1504 if you need to.