ostris/flux-dev-lora-trainer
Fine-tune FLUX.1-dev using ai-toolkit
Blog post: Learn about training with Flux.
Trainings for this model run on Nvidia H100 GPU hardware, which costs $0.001528 per second. Upon creation, you will be redirected to the training detail page where you can monitor your training's progress, and eventually download the weights and run the trained model.
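For example, a training run that takes, say, 30 minutes (1,800 seconds) would cost roughly 1,800 × $0.001528 ≈ $2.75; actual duration depends on your settings, particularly steps.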
Note: versions of this model with fast booting run on the hardware configured for the base model they were trained from.
You can fine-tune FLUX.1 on Replicate by just uploading some images, either on the web or via an API.
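Whichever route you take, your training data needs to be packaged as a single zip archive of images. Here is a minimal sketch of preparing that archive; the folder name, archive name, and file extensions are just assumptions for illustration:

```python
# Bundle local training images into a zip archive to upload as input_images.
# "training-images/" and the extensions below are assumptions; use whatever matches your data.
from pathlib import Path
import zipfile

image_dir = Path("training-images")
archive = Path("data.zip")

count = 0
with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
    for path in sorted(image_dir.iterdir()):
        if path.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"}:
            zf.write(path, arcname=path.name)  # keep a flat layout inside the archive
            count += 1

print(f"Wrote {count} images to {archive}")
```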
- Select a model as your destination or create a new one by typing the name in the model selector field.
- Next, upload the zip file containing your training images as the input_images input.
- Set up the training parameters:
The trigger_word refers to the object, style or concept you are training on. Pick a string that isn’t a real word, like TOK or something related to what’s being trained, like CYBRPNK. The trigger word you specify will be associated with all images during training. Then when you run your fine-tuned model, you can include the trigger word in prompts to activate your concept.
For steps, a good starting point is 1000.
Leave the learning_rate, batch_size, and resolution at their default values. Leave autocaptioning enabled unless you want to provide your own captions.
If you want to save your model on Hugging Face, enter your Hugging Face token and set the repository ID.
- Once you’ve filled out the form, click “Create training” to begin fine-tuning. If you’d rather kick off trainings programmatically, see the API sketch below.
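If you prefer the API route mentioned above, the same form fields map onto training inputs. The sketch below uses the Replicate Python client; the version ID and destination are placeholders, and input names such as autocaption, hf_token, and hf_repo_id are assumptions to verify against this model's current API schema (input_images, trigger_word, steps, learning_rate, batch_size, and resolution match the fields described above):

```python
# Rough sketch: start a fine-tune via the Replicate API instead of the web form.
# Requires `pip install replicate` and REPLICATE_API_TOKEN set in your environment.
# The version ID, destination, and some input names (autocaption, hf_token,
# hf_repo_id) are placeholders; check this model's API schema for the exact names.
import replicate

training = replicate.trainings.create(
    version="ostris/flux-dev-lora-trainer:<version-id>",  # paste the current version ID
    destination="your-username/your-new-flux-model",      # the model you chose as destination
    input={
        "input_images": open("data.zip", "rb"),  # the zip archive of training images
        "trigger_word": "TOK",                   # string that will activate your concept
        "steps": 1000,                           # a good starting point
        # learning_rate, batch_size, and resolution are left at their defaults
        "autocaption": True,                     # assumed name for the autocaptioning toggle
        # Optional: also push the trained weights to Hugging Face
        # "hf_token": "<your-hugging-face-token>",
        # "hf_repo_id": "your-hf-username/your-repo",
    },
)

print(training.id, training.status)
```

Once training finishes, run the resulting model with the trigger word in your prompt (for example, “a photo of TOK at the beach”) to activate your concept.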