cjwbw / hard-prompts-made-easy

Gradient-Based Discrete Optimization for Prompt Tuning and Discovery

  • Public
  • 650 runs
  • A100 (80GB)
  • GitHub
  • Paper
  • License

Input

file

Input image.

string

Alternatively, you can provide a URL to an image. Ignored when an image is uploaded.

string

Optional input prompt. By default, the model learns a prompt from the input image and uses it for generation, but you can also provide your own custom prompt.

integer
(minimum: 1, maximum: 500)

Number of denoising steps

Default: 25

number
(minimum: 1, maximum: 20)

Scale for classifier-free guidance

Default: 9

string

Choose a scheduler.

Default: "DPMSolverMultistep"

Output

best_prompt

fulham children weymouth seaside gita octane equestrian artforsale artists surya pino victorian impressionism romantic impressionist westend

original_image

generated_images

Run time and cost

This model costs approximately $0.48 to run on Replicate, or 2 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 6 minutes. The predict time for this model varies significantly based on the inputs.
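
If you run the model locally, Replicate's published containers serve Cog's standard HTTP API. A hedged sketch of querying such a local server, assuming the container is already running on port 5000 per the Docker instructions linked from this page (the `image` field name is an assumption):

```python
import requests

# POST to Cog's standard prediction endpoint on the local container.
resp = requests.post(
    "http://localhost:5000/predictions",
    json={"input": {"image": "https://example.com/photo.jpg"}},
)
resp.raise_for_status()
print(resp.json()["output"])  # best prompt and generated images
```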

Readme

Hard Prompts Made Easy: Discrete Prompt Tuning for Language Models

This code is the official implementation of Hard Prompts Made Easy.

If you have any questions, feel free to email Yuxin (ywen@umd.edu).

About

From a given image, we first optimize a hard prompt using the PEZ algorithm and CLIP encoders. Then we feed the optimized prompt into Stable Diffusion to generate new images. The name PEZ (hard Prompts made EaZy) was inspired by the PEZ candy dispenser.
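
For intuition, here is a minimal, self-contained sketch of the projection idea behind PEZ, not the repository's actual code: a continuous "soft" prompt is optimized while the loss is always evaluated at its nearest-neighbor projection onto real token embeddings, with the gradient routed back to the soft prompt. All tensors below are toy stand-ins for the real CLIP components.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for the real CLIP pieces (assumptions for illustration).
vocab_size, dim, prompt_len, num_steps = 1000, 64, 8, 200
vocab_embeddings = torch.randn(vocab_size, dim)            # token embedding table
image_features = F.normalize(torch.randn(1, dim), dim=-1)  # CLIP image features

def clip_text_encode(embeds):
    # Placeholder for the CLIP text encoder: mean-pool and normalize.
    return F.normalize(embeds.mean(dim=0, keepdim=True), dim=-1)

def project(soft):
    # Nearest-neighbor projection of each soft embedding onto the vocabulary.
    ids = torch.cdist(soft, vocab_embeddings).argmin(dim=-1)
    return ids, vocab_embeddings[ids]

# Initialize the soft prompt from random vocabulary entries.
init_ids = torch.randint(vocab_size, (prompt_len,))
soft = torch.nn.Parameter(vocab_embeddings[init_ids].clone())
opt = torch.optim.AdamW([soft], lr=0.1)

for _ in range(num_steps):
    ids, hard = project(soft)
    # Evaluate the loss at the projected (hard) prompt, but route the
    # gradient back to the soft embeddings (straight-through estimator).
    hard = soft + (hard - soft).detach()
    loss = 1 - (clip_text_encode(hard) * image_features).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

best_ids, _ = project(soft)  # decode with the tokenizer to get the hard prompt
```

In the real implementation, actual CLIP image and text encoders supply the similarity loss, and the decoded hard prompt is what appears as best_prompt in the output above.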