lucataco / video-split

Video Preprocessing tool for LoRA Training


Run time and cost

This model runs on CPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

About

A video preprocessing tool designed specifically for preparing videos for LoRA fine-tuning. It processes videos into the format required for fine-tuning, with options for segment length, resolution, and frame rate.

Input Video Requirements

  • Format: MP4 or MOV files
  • Length: Any length (will be split into segments)
  • Resolution: Any resolution (will be processed internally)
  • Quality: Clear, well-lit videos work best

Output Format

  • A zip file called processed_videos.zip
  • The output folder contains segments named in the format segmentX.mp4
  • An empty caption file is created with the same name as each video segment, e.g. segment1.mp4 + segment1.txt (see the sketch after this list for filling these in)
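If you want to fill in those empty caption files programmatically after downloading the zip, a minimal Python sketch along these lines works. The file names (processed_videos.zip, segmentX.mp4, segmentX.txt) come from the output format above; the caption string is only a placeholder, and the folder layout inside the zip may differ slightly.

```python
# Minimal sketch: unpack processed_videos.zip and fill in the empty caption files.
import zipfile
from pathlib import Path

out_dir = Path("dataset")
with zipfile.ZipFile("processed_videos.zip") as zf:
    zf.extractall(out_dir)

for clip in sorted(out_dir.glob("**/segment*.mp4")):
    caption_file = clip.with_suffix(".txt")
    if caption_file.exists() and caption_file.stat().st_size == 0:
        # Replace this placeholder with a real caption (manual or from a captioning model).
        caption_file.write_text("a VHS-style video clip")
```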

How to use

  • If you have a long video of the effect you want to train on, use this model to split it into segments of the right length and resolution (see the sketch after this list).
  • Then use a video captioning model such as nateraw/video-llava, ChatGPT, or Gemini to pair each video segment with a .txt caption file.
  • The resulting zip of video segments and captions can then be used to train a video model with genmoai/mochi-1-lora-trainer.
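Here is a rough sketch of that workflow using the Replicate Python client. The input key names ("video", "video_path", "prompt") are assumptions rather than the models' documented schemas; check each model's API tab on Replicate for the exact parameter names.

```python
# Sketch of the split-then-caption workflow with the Replicate Python client.
import replicate

# 1. Split the long source video into training-ready segments.
split_output = replicate.run(
    "lucataco/video-split",
    input={"video": open("my_effect_video.mp4", "rb")},  # key name assumed
)
# split_output points at processed_videos.zip (URL or file output).

# 2. Caption one extracted segment with a video captioning model.
caption = replicate.run(
    "nateraw/video-llava",
    input={
        "video_path": open("dataset/segment1.mp4", "rb"),       # key name assumed
        "prompt": "Describe this video clip in one sentence.",  # key name assumed
    },
)
print(caption)
```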

Example run

The featured example is a VHS effect video by Luis Quintero. This model takes that video and splits it into five 2-second segments at 848 × 480 resolution, returned in a single zip file.

Train your own Mochi-1 LoRA

Train your own video LoRA with the model: genmoai/mochi-1-lora-trainer
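A minimal sketch of kicking off that training run with the Replicate Python client, assuming the trainer accepts the zip of segments and captions directly; the input key name below is an assumption, so check the trainer's API tab for its real schema.

```python
# Sketch: train a Mochi-1 LoRA on the prepared zip of segments + captions.
import replicate

training_output = replicate.run(
    "genmoai/mochi-1-lora-trainer",
    input={
        "input_videos": open("processed_videos.zip", "rb"),  # key name assumed
    },
)
print(training_output)
```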