aramintak / linnea-flux-beta

An original character LoRA

  • Public
  • 411 runs
  • H100

Input

*string

Prompt for generated image. If you include the `trigger_word` used in the training process you are more likely to activate the trained object, style, or concept in the resulting image.

file

Input image for image to image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.

file

Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.

string

Aspect ratio for the generated image. If `custom` is selected, the `height` and `width` inputs below are used, and generation runs in bf16 mode.

Default: "1:1"

integer
(minimum: 256, maximum: 1440)

Height of the generated image. Only takes effect if `aspect_ratio` is set to `custom`. The value will be rounded to the nearest multiple of 16. Incompatible with fast generation.

integer
(minimum: 256, maximum: 1440)

Width of the generated image. Only takes effect if `aspect_ratio` is set to `custom`. The value will be rounded to the nearest multiple of 16. Incompatible with fast generation.
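
The clamping and rounding described for `height` and `width` can be sketched as a small helper. This is illustrative only, assuming the stated 256–1440 range and rounding to the nearest multiple of 16; it is not the model's actual code:

```python
def snap_dimension(value: int) -> int:
    """Round a requested dimension to the nearest multiple of 16,
    then clamp to the documented 256-1440 range."""
    snapped = round(value / 16) * 16
    return max(256, min(1440, snapped))
```

For example, a requested width of 777 would be rounded to 784, and anything below 256 or above 1440 is clamped to the boundary.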

number
(minimum: 0, maximum: 1)

Prompt strength when using img2img. 1.0 corresponds to full destruction of the information in the source image.

Default: 0.8

string

Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model only needs 4 steps.

Default: "dev"

integer
(minimum: 1, maximum: 4)

Number of outputs to generate

Default: 1

integer
(minimum: 1, maximum: 50)

Number of denoising steps. More steps can give more detailed images, but take longer.

Default: 28

number
(minimum: 0, maximum: 10)

Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3, and 3.5.

Default: 3

integer

Random seed. Set for reproducible generation

string

Format of the output images

Default: "webp"

integer
(minimum: 0, maximum: 100)

Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs

Default: 80

boolean

Disable safety checker for generated images.

Note: this model's safety checker can't be disabled when running on the Replicate website. Learn more about platform safety on Replicate.

Default: false

boolean

Run faster predictions with a model optimized for speed (currently fp8-quantized); disable to run the original bf16 weights.

Default: false

string

Approximate number of megapixels for the generated image.

Default: "1"

number
(minimum: -1, maximum: 3)

Determines how strongly the main LoRA should be applied. Sane results fall between 0 and 1 for base inference. For `go_fast` we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.

Default: 1
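
The `go_fast` multiplier described above amounts to a simple scaling rule. A sketch of that arithmetic (the helper name is mine, not part of the model's API):

```python
def effective_lora_scale(base_scale: float, go_fast: bool) -> float:
    """Apply the documented 1.5x multiplier to the LoRA scale
    when go_fast (fp8) mode is enabled; pass through otherwise."""
    return base_scale * 1.5 if go_fast else base_scale
```

So a base scale of 1.0 behaves roughly like 1.5 under `go_fast`, which is why the same value can look stronger in fast mode.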

string

Load LoRA weights. Supports Replicate models in the format <owner>/<model-name> or <owner>/<model-name>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.

number
(minimum: -1, maximum: 3)

Determines how strongly the extra LoRA should be applied. Sane results fall between 0 and 1 for base inference. For `go_fast` we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.

Default: 1
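
The inputs above can be combined into a programmatic call. Below is a minimal sketch using the Replicate Python client (`pip install replicate`, with `REPLICATE_API_TOKEN` set). The parameter names are inferred from the descriptions above and may not match the model's exact schema:

```python
# Input values drawn from the defaults and trigger word on this page.
# Key names are inferred from the descriptions and may differ from
# the actual input schema.
inputs = {
    "prompt": "linnea teal hair, portrait illustration",
    "aspect_ratio": "1:1",
    "num_inference_steps": 28,
    "guidance_scale": 3,
    "output_format": "webp",
}

def generate():
    """Run the model on Replicate (network call; requires an API token)."""
    import replicate
    return replicate.run("aramintak/linnea-flux-beta", input=inputs)
```

Including the trigger word `linnea teal hair` in the prompt, as noted above, makes the character most likely to appear.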


Run time and cost

This model runs on Nvidia H100 GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

Model description

Meet Linnea!

This is my personal OC, Linnea, who is a big focus in my own art and illustration. The dataset is composed entirely of hand-drawn illustrations, and I'll be using her for some future projects. This is an early model; I expect I'll work out some improved parameters in the near future, so it should continue to get better.

I wanted to share her and will be writing some blogs/making videos in the near future on how to train a character like this. I also have no issue with people using her in their own experiments, particularly to test out mixing characters with styles for Flux Dev.

That being said, she is not For Commercial Use! This is only for fun and research. :) I would prefer not seeing her become someone's mascot. If you would like to collaborate on a project that features this character, you'll need to contact me and I might not agree to it anyway. :)

Note: The hair color still sometimes varies; it is easiest to fix by including "teal hair" in the prompt.

Dataset example images: IMG_0093.PNG, IMG_0160.PNG, IMG_0208.PNG

Special Thanks!

I trained this model while I was testing out the Replicate training pipeline, big thanks to the Replicate team. :)


Trigger words

You should use `linnea teal hair` to trigger the image generation.