
replicate /dreambooth:9e8f3f42

Input

*string

The prompt with an identifier specifying the instance.

*string

The prompt specifying images in the same class as the provided instance images.

*file

A ZIP file containing the training data of instance images

file

A ZIP file containing the training data of class images. If not provided, class images will be generated automatically.

string

The prompt used to generate sample outputs to save.

string

The negative prompt used to generate sample outputs to save.

integer

The number of samples to save.

Default: 4

number

Classifier-free guidance (CFG) scale for saved samples.

Default: 7.5

integer

The number of inference steps for saved samples.

Default: 50

boolean

Flag to pad tokens to length 77.

Default: false

boolean

Flag to add prior preservation loss.

Default: true

number

Weight of prior preservation loss.

Default: 1
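In the standard DreamBooth formulation, this weight scales the class-image (prior) loss before it is added to the instance loss. A minimal sketch of that arithmetic, using hypothetical per-step loss values (only the default weight comes from the schema above):

```python
# Hypothetical per-step loss values; only prior_loss_weight (default 1)
# comes from the schema above.
instance_loss = 0.12                 # loss on the instance images
prior_loss = 0.08                    # loss on the generated class images
prior_loss_weight = 1.0              # default from the schema

# Total DreamBooth loss with prior preservation enabled.
total_loss = instance_loss + prior_loss_weight * prior_loss
```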

integer

Minimum number of class images for prior preservation loss. If there are not enough class images, additional images will be sampled with class_prompt.

Default: 50

integer

A seed for reproducible training.

Default: 1337

integer

The resolution for input images. All the images in the train/validation dataset will be resized to this resolution.

Default: 512

boolean

Whether to center crop images before resizing to the target resolution.

Default: false

boolean

Whether to train the text encoder.

Default: true

integer

Batch size (per device) for the training dataloader.

Default: 1

integer

Batch size (per device) for sampling images.

Default: 4

integer

Number of training epochs to perform.

Default: 1

integer

Total number of training steps to perform. If provided, overrides num_train_epochs.

Default: 500

integer

Number of update steps to accumulate before performing a backward/update pass.

Default: 1
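Gradient accumulation multiplies the effective batch size without increasing per-device memory. A minimal sketch of the arithmetic, assuming a hypothetical single-GPU run with the defaults above:

```python
train_batch_size = 1                 # per-device batch size (default)
gradient_accumulation_steps = 1      # default
num_gpus = 1                         # hypothetical single-GPU run

# Batch size effectively seen by each optimizer update.
effective_batch_size = train_batch_size * gradient_accumulation_steps * num_gpus
```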

boolean

Whether or not to use gradient checkpointing to save memory at the expense of a slower backward pass.

Default: true

number

Initial learning rate (after the potential warmup period) to use.

Default: 0.000001

boolean

Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.

Default: false
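When this flag is enabled, the base learning rate is multiplied linearly by those three factors. A minimal sketch, assuming a hypothetical two-GPU setup and otherwise default values:

```python
base_learning_rate = 1e-6            # default from the schema
num_gpus = 2                         # hypothetical two-GPU run
gradient_accumulation_steps = 1      # default
train_batch_size = 1                 # default per-device batch size

# Linear learning-rate scaling applied when the flag is enabled.
scaled_learning_rate = (base_learning_rate * num_gpus
                        * gradient_accumulation_steps * train_batch_size)
```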

string

The learning-rate scheduler type to use.

Default: "constant"

integer

Number of steps for the warmup in the lr scheduler.

Default: 0

boolean

Whether or not to use 8-bit Adam from bitsandbytes.

Default: true

number

The beta1 parameter for the Adam optimizer.

Default: 0.9

number

The beta2 parameter for the Adam optimizer.

Default: 0.999

number

Weight decay to use.

Default: 0.01

number

Epsilon value for the Adam optimizer.

Default: 1e-8

number

Max gradient norm.

Default: 1

integer

Save weights every N steps.

Default: 10000
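Taken together, the fields above form the model's training input. A rough sketch of a payload using the listed defaults follows; the key names are assumptions inferred from the field descriptions (only class_prompt and num_train_epochs are named explicitly above), so verify them against the model's API schema before use:

```python
# Hypothetical training payload built from the defaults listed above.
# All key names except class_prompt are assumptions; verify them
# against the model's API schema before use.
training_input = {
    "instance_prompt": "a photo of sks dog",   # identifier prompt (required)
    "class_prompt": "a photo of a dog",        # class prompt (required)
    "seed": 1337,
    "resolution": 512,
    "train_batch_size": 1,
    "sample_batch_size": 4,
    "max_train_steps": 500,
    "gradient_accumulation_steps": 1,
    "learning_rate": 1e-6,
    "lr_scheduler": "constant",
    "lr_warmup_steps": 0,
    "prior_loss_weight": 1.0,
    "num_class_images": 50,
}

# The payload would then be sent with the Replicate Python client, e.g.:
# import replicate
# replicate.run("replicate/dreambooth:9e8f3f42", input=training_input)
```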
