web-cardinalblue / piccollage-bg-lora

  • Public
  • 476 runs
  • A100 (80GB)
  • GitHub

Input

Install Replicate's Python client library:

pip install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import the client:

import replicate

Run web-cardinalblue/piccollage-bg-lora using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

output = replicate.run(
    "web-cardinalblue/piccollage-bg-lora:88d86ab8b9c4472daf9581dd74966b500c477ed93d4e7842379fd7713b5ee9f4",
    input={
        "seamless": True,
        "batch_size": 8,
        "lora_model": "Vector Style, 77MB_Rank100_Steps4000",
        "num_images": 8,
        "negative_prompt": "weird, blurred, low-quality, clustered, ugly"
    }
)
print(output)
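
The exact shape of the output depends on the model's schema; for image models like this one it is typically a list of image URLs (newer client versions may return file-like objects instead). A minimal sketch for saving the generated backgrounds, assuming the output is a list of URL strings:

import urllib.request

# Assumes `output` is a list of image URL strings, which is typical for
# image-generation models; check the model's schema to confirm.
for i, url in enumerate(output):
    urllib.request.urlretrieve(url, f"background_{i}.png")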

To learn more, take a look at the guide on getting started with Python.


Run time and cost

This model costs approximately $0.0093 to run on Replicate, or 107 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 7 seconds.
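
The estimate above is roughly the per-second hardware price multiplied by the typical prediction time. A quick sketch of that arithmetic; the per-second rate below is an illustrative assumption derived from the numbers above, not Replicate's published price:

# Rough cost estimate: per-second GPU price x typical prediction time.
price_per_second = 0.00133  # USD per second on A100 (80GB), assumed for illustration
predict_time = 7            # seconds, typical for this model
cost_per_run = price_per_second * predict_time
print(f"~${cost_per_run:.4f} per run, ~{1 / cost_per_run:.0f} runs per $1")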

Readme

This model generates background images in PicCollage style from a text prompt.

Model description

You can pick a LoRA model, and the corresponding layers will be attached to the base model at inference time. One of the following suffixes is also appended automatically to the prompt you enter:

  • Vector Style, 77MB_Rank100_Steps4000: ", background in sks style"
  • Vector Style, 77MB_Rank100_Steps6000: ", background in sks style"
  • Vector Style, 7MB_Rank10_Steps4000: ", piccollage style"

This is because the larger models were trained with the prompt "background in sks style", while the smaller model was trained with the prompt "piccollage style". You don't need to worry about this; the matching suffix is added for you.
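
A minimal sketch of what that automatic suffixing amounts to (the names and helper below are illustrative, not the model's actual code):

# Hypothetical illustration of the prompt-suffix mapping described above.
LORA_PROMPT_SUFFIXES = {
    "Vector Style, 77MB_Rank100_Steps4000": ", background in sks style",
    "Vector Style, 77MB_Rank100_Steps6000": ", background in sks style",
    "Vector Style, 7MB_Rank10_Steps4000": ", piccollage style",
}

def build_prompt(user_prompt, lora_model):
    # Append the training-time suffix that matches the chosen LoRA model.
    return user_prompt + LORA_PROMPT_SUFFIXES[lora_model]

print(build_prompt("a calm beach at sunset", "Vector Style, 7MB_Rank10_Steps4000"))
# -> "a calm beach at sunset, piccollage style"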