
genmoai/mochi-1-lora-trainer:9f7c6284

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

input_videos (string)
A zip file containing the video snippets that will be used for training. We recommend a minimum of 12 videos of only a few seconds each. If you include captions, include them as one .txt file per video, e.g. video-1.mp4 should have a caption file named video-1.txt.
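For illustration, here is a minimal sketch of packaging clips and captions into that layout with Python's standard library (the folder and file names are hypothetical):

import zipfile
from pathlib import Path

# Hypothetical folder holding video-1.mp4, video-1.txt, video-2.mp4, ...
clips_dir = Path("training_clips")

with zipfile.ZipFile("input_videos.zip", "w") as zf:
    for path in sorted(clips_dir.iterdir()):
        # Keep only the clips and their matching caption files; each caption
        # shares its video's stem (video-1.mp4 pairs with video-1.txt).
        if path.suffix in {".mp4", ".txt"}:
            zf.write(path, arcname=path.name)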
trim_and_crop (boolean, default: true)
Automatically trim and crop video inputs.
steps (integer, default: 100, min: 10, max: 6000)
Number of training steps. Recommended range: 500-4000.
learning_rate (number, default: 0.0004)
Learning rate. If you're new to training, you probably don't need to change this.
caption_dropout (number, default: 0.1, min: 0.01, max: 1)
Caption dropout. If you're new to training, you probably don't need to change this.
batch_size (integer, default: 1)
Batch size. You can leave this as 1.
optimizer (string, default: adamw)
Optimizer to use for training. Supports: adam, adamw.
compile_dit (boolean, default: false)
Compile the transformer (DiT).
seed (integer, default: 42, max: 100000)
Seed for reproducibility. You can leave this as 42.
hf_repo_id (string)
Hugging Face repository ID, if you'd like to upload the trained LoRA to Hugging Face. For example, lucataco/mochi-lora-vhs. If the given repo does not exist, a new public repo will be created.
hf_token (string)
Hugging Face token, if you'd like to upload the trained LoRA to Hugging Face.
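As a sketch of a full request, assuming the official replicate Python client and a hypothetical local zip file (9f7c6284 is the truncated version ID shown above; pass the full version ID in practice):

import replicate

output = replicate.run(
    "genmoai/mochi-1-lora-trainer:9f7c6284",  # truncated ID; use the full one
    input={
        "input_videos": open("input_videos.zip", "rb"),  # hypothetical file
        "steps": 1000,           # within the recommended 500-4000 range
        "learning_rate": 0.0004, # the default
        "seed": 42,
    },
)
print(output["weights"])  # URI of the trained LoRA weights

Any field left out of input falls back to the default listed above.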

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "properties": {
    "weights": {
      "format": "uri",
      "title": "Weights",
      "type": "string"
    }
  },
  "required": ["weights"],
  "title": "Output",
  "type": "object"
}
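Because weights is a URI, the trained LoRA can be fetched directly once the run finishes. A minimal sketch using the standard library (output is the value returned by the run sketch above, and the local file name is hypothetical):

import urllib.request

url = str(output["weights"])  # the URI from the output object
urllib.request.urlretrieve(url, "trained_lora.safetensors")  # hypothetical name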