phospho-app / gr00t-policy

Fine-tune GR00T-N1 from Nvidia on a LeRobot dataset

  • Public
  • 261 runs
  • A100 (80GB)

Input

  • string (required): Hugging Face dataset ID to train on. LeRobot format > v2.0 expected, e.g. 'LegrandFrederic/dual-setup'.
  • secret (required): Hugging Face API token, needed to download your dataset and upload your model. Please use a fine-grained token with Repository write access.
  • secret: Weights & Biases API key (optional, to track the online training). Find yours here: https://wandb.ai/authorize
  • string: Hugging Face model name to upload the trained model to (optional). If not provided, a random name will be generated. Default: ""
  • integer (minimum: 1, maximum: 80): Batch size for training. Default: 64
  • integer (minimum: 1, maximum: 50): Number of epochs to train for. Default: 20
  • number (minimum: 0.00001, maximum: 0.01): Learning rate for training. Default: 0.0002
  • secret: Modal token ID. If you don't know what this is, leave it empty.
  • secret: Modal token secret. If you don't know what this is, leave it empty.

Secrets have their values redacted after being sent to the model.
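
These inputs can be passed programmatically. Below is a minimal sketch of launching a run with the Replicate Python client; the input field names are assumptions inferred from the descriptions above (only wandb_api_key appears on the page), so check the model's API schema before relying on them.

```python
# Minimal sketch: launch a fine-tuning run via the Replicate Python client.
# NOTE: all input field names except wandb_api_key are assumptions inferred
# from the parameter descriptions above; verify them against the API schema.
# Requires the REPLICATE_API_TOKEN environment variable to be set.
import replicate

output = replicate.run(
    "phospho-app/gr00t-policy",  # optionally pin a specific version with ":<version>"
    input={
        "dataset_name": "LegrandFrederic/dual-setup",  # assumed field name
        "hf_token": "hf_...",        # assumed field name; fine-grained token with repo write access
        "wandb_api_key": "",         # optional, from https://wandb.ai/authorize
        "model_name": "",            # assumed field name; empty -> random name is generated
        "batch_size": 64,
        "epochs": 20,
        "learning_rate": 0.0002,
    },
)
print(output)  # reports the Hugging Face model ID once training completes
```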

Run time and cost

This model runs on Nvidia A100 (80GB) GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

This pipeline is designed to train a GR00T model from Nvidia on a LeRobot dataset.

This run will push a model to Hugging Face for you to access. The model ID will be available when the run completes.
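
Once the run finishes, you can pull the pushed checkpoint locally. A minimal sketch using huggingface_hub follows; the repository ID is a placeholder for whatever model ID the run reports.

```python
# Minimal sketch: download the fine-tuned checkpoint pushed to Hugging Face.
# "your-username/your-gr00t-model" is a placeholder for the model ID
# reported at the end of the run.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="your-username/your-gr00t-model",
    token="hf_...",  # only needed if the model repo is private
)
print(f"Checkpoint downloaded to {local_dir}")
```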

Note that we expect your dataset to have 1 or 2 cameras and your robot to be 6 DoF; this works with a regular so100.
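
To sanity-check a dataset against these expectations before launching a run, you can inspect its metadata. A rough sketch, assuming the LeRobot v2.x layout where meta/info.json describes the dataset's features (the exact layout may differ between LeRobot versions):

```python
# Rough sketch: check camera count and state dimensionality of a LeRobot
# dataset before training. Assumes the v2.x layout where meta/info.json
# lists every feature; adapt if your dataset version differs.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="LegrandFrederic/dual-setup",
    filename="meta/info.json",
    repo_type="dataset",
)
with open(path) as f:
    info = json.load(f)

features = info.get("features", {})
cameras = [k for k, v in features.items() if v.get("dtype") in ("video", "image")]
state_shape = features.get("observation.state", {}).get("shape")

print(f"Cameras found: {len(cameras)} -> {cameras}")  # expect 1 or 2
print(f"State shape: {state_shape}")                   # expect [6] for a 6 DoF arm
```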

If you are training a different or more specific configuration, this pipeline might fail.

With default parameters, training takes up to 3 hours (which costs roughly $10). Decrease the number of epochs for faster and cheaper training runs.
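
For example, assuming run time scales roughly linearly with the number of epochs, cutting the default 20 epochs to 10 should bring a ~3 hour, ~$10 run down to roughly 1.5 hours and about $5.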

Reach out and join our Discord for more AI robotics: https://discord.com/invite/m8wzBGQA55