laion-ai / laionide-v3

GLIDE finetuned on LAION5B, then more on curated datasets.

  • Public
  • 62K runs
  • T4
  • GitHub
  • Paper
  • License

Input

*string

Text prompt to use.

integer
(minimum: 1, maximum: 6)

Batch size. Number of images to generate per prediction.

Default: 3

integer

Must be a multiple of 8. Going above 64 is not recommended. The output image will be 4x larger.

Default: 64

integer

Must be a multiple of 8. Going above 64 is not recommended. The output image will be 4x larger.

Default: 64

boolean

Performs prompt-aware upsampling to 4x the base resolution.

Default: true

number

Classifier-free guidance scale. Higher values move further from the unconditional output; lower values move closer to it. Negative values guide towards semantically opposite outputs. 4-16 is a reasonable range.

Default: 4
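
As a rough sketch of what this scale does, here is standard classifier-free guidance as used in GLIDE-style samplers (an illustration, not necessarily this model's exact code path):

```python
import torch

def classifier_free_guidance(cond_eps: torch.Tensor,
                             uncond_eps: torch.Tensor,
                             scale: float) -> torch.Tensor:
    """Blend the conditional and unconditional noise predictions.

    scale = 0 reproduces the unconditional output, scale = 1 the plain
    conditional output, larger values push further from the unconditional
    prediction, and negative values guide toward the opposite of the prompt.
    """
    return uncond_eps + scale * (cond_eps - uncond_eps)
```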

string

Upsample temperature. Consider lowering to ~0.997 for slightly blurrier images with fewer artifacts.

Default: "0.998"

string

Number of timesteps to use for the base model. Going above 50 has diminishing returns.

Default: "40"

string

Number of timesteps to use for the upsampler (SR) model. Going above 40 has diminishing returns.

Default: "17"

integer

Seed for reproducibility.

Default: 0
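
A minimal sketch of passing these inputs through the Replicate Python client. The field names inside `input` are illustrative placeholders, not confirmed against this model's schema, so check the actual input names before relying on them:

```python
import replicate

output = replicate.run(
    "laion-ai/laionide-v3:db44d812",  # shortened version hash from the example below
    input={
        "prompt": "an oil painting of a lighthouse at dusk",  # text prompt
        "batch_size": 3,        # number of images per prediction
        "guidance_scale": 4,    # classifier-free guidance scale
        "seed": 0,              # for reproducibility
    },
)
print(output)  # file URL(s) for the generated image(s)
```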

Output

file

This example was created by a different version, laion-ai/laionide-v3:db44d812.

Run time and cost

This model runs on Nvidia T4 GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

Laionide (version 3)

Direct comparison to OpenAI’s model using COCO captions

Shout out to stability.ai for donating to LAION the compute needed to make this possible.

Files:

- laionide-v3-base.pt

Inference:

- replicate
- colab
- locally

Results:

- comparison to OpenAI (W&B report)

Notes:

- You can use laionide-v2-sr.pt to upscale the outputs from laionide-v3-base.pt.
- There are watermarks in some outputs. You can try to prompt-engineer this away, but it isn't always possible; adding "royalty free" to the prompt seems to work well.

Training details:

- Finetuned laionide-v2-base.pt for 9 epochs on a subset of CC12M (~1.5 million pairs), COCO (~100K pairs), Visual Genome (~100K pairs), and Open Images localized annotations (~800K pairs).
- 20% of captions were replaced with the unconditional/empty token, per the paper.
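
For illustration, that 20% unconditional-token replacement typically amounts to caption dropout during training, which is what lets the model produce the unconditional predictions needed for classifier-free guidance at sampling time. A hedged sketch (not the actual training code):

```python
import random

def maybe_drop_caption(caption: str, p_uncond: float = 0.2) -> str:
    # With probability p_uncond, replace the caption with the empty
    # (unconditional) token so the model also learns an unconditional
    # distribution alongside the text-conditioned one.
    return "" if random.random() < p_uncond else caption
```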