
genmoai/mochi-1-lora-trainer:170ea99f

Input

file

A zip file containing the video snippets that will be used for training. We recommend a minimum of 12 videos of only a few seconds each. If you include captions, include them as one .txt file per video, e.g. video-1.mp4 should have a caption file named video-1.txt.
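The video/caption pairing convention above can be sketched as follows. This is a minimal helper, not part of the trainer itself; the function name and directory layout are illustrative:

```python
import zipfile
from pathlib import Path

def build_training_zip(video_dir: str, zip_path: str) -> None:
    """Zip each .mp4 together with its same-named .txt caption, if one exists."""
    src = Path(video_dir)
    with zipfile.ZipFile(zip_path, "w") as zf:
        for video in sorted(src.glob("*.mp4")):
            # video-1.mp4 goes in alongside video-1.txt, per the convention above
            zf.write(video, video.name)
            caption = video.with_suffix(".txt")
            if caption.exists():
                zf.write(caption, caption.name)
```

Clips without a caption file are still included; only the caption is optional.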

boolean

Automatically trim and crop video inputs

Default: true

integer
(minimum: 10, maximum: 6000)

Number of training steps. The recommended range is 500-4000.

Default: 100

number

Learning rate. If you're new to training, you probably don't need to change this.

Default: 0.0004

number
(minimum: 0.01, maximum: 1)

Caption dropout. If you're new to training, you probably don't need to change this.

Default: 0.1
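Caption dropout typically means that, with the given probability, a clip's caption is replaced with an empty string for that training step, which helps the model learn to generate without text conditioning. A minimal sketch of the idea, assuming this standard interpretation (the helper name is hypothetical, not from the trainer):

```python
import random

def maybe_drop_caption(caption: str, p: float = 0.1, rng=random.random) -> str:
    """With probability p, train on an empty caption instead of the real one."""
    # rng is injectable so the behavior can be tested deterministically
    return "" if rng() < p else caption
```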

integer

Batch size. You can leave this as 1.

Default: 1

string

Optimizer to use for training. Supports: adam, adamw.

Default: "adamw"

boolean

Compile the transformer.

Default: false

integer
(minimum: 0, maximum: 100000)

Seed for reproducibility. You can leave this as 42.

Default: 42

string

Hugging Face repository ID, if you'd like to upload the trained LoRA to Hugging Face. For example, lucataco/mochi-lora-vhs. If the given repo does not exist, a new public repo will be created.

secret

A secret has its value redacted after being sent to the model.

Hugging Face token, if you'd like to upload the trained LoRA to Hugging Face.
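Putting the inputs together, a call through Replicate's Python client might look like the sketch below. The input field names are assumptions inferred from the descriptions above, not confirmed slugs, and the run itself is commented out because it needs a Replicate API token and a real zip; the numeric bounds mirror the schema above:

```python
# Hypothetical input payload for genmoai/mochi-1-lora-trainer.
# Every key name here is an illustrative guess; check the model's API
# schema for the exact slugs before using this.
training_input = {
    "input_videos": "https://example.com/videos.zip",  # zip of clips + optional .txt captions
    "trim_and_crop": True,      # default: true
    "steps": 1000,              # 10-6000; recommended 500-4000
    "learning_rate": 4e-4,      # default: 0.0004
    "caption_dropout": 0.1,     # 0.01-1; default: 0.1
    "batch_size": 1,            # default: 1
    "optimizer": "adamw",       # adam or adamw
    "seed": 42,                 # 0-100000
}

# Sanity-check values against the documented bounds before submitting.
assert 10 <= training_input["steps"] <= 6000
assert 0.01 <= training_input["caption_dropout"] <= 1
assert training_input["optimizer"] in {"adam", "adamw"}
assert 0 <= training_input["seed"] <= 100000

# import replicate
# output = replicate.run(
#     "genmoai/mochi-1-lora-trainer:170ea99f",  # pin the version shown above
#     input=training_input,
# )
```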
