Collections

Edit your videos

Frequently asked questions

Which models are the fastest?

For quick edits and smaller clips, luma/reframe-video is one of the fastest options—it can reformat short videos (up to 30 seconds) in 720p almost instantly.

lucataco/trim-video and lucataco/video-merge are also lightweight tools designed for snappy turnaround when cutting or combining short clips.
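If you call these models programmatically, a quick trim can be submitted with Replicate's Python client. A minimal sketch, with one caveat: the input field names below ("video", "start_time", "end_time") are illustrative assumptions, not the model's confirmed schema, so check the API tab on the model's page first.

```python
def build_trim_input(video_url: str, start: float, end: float) -> dict:
    """Assemble an input payload for a trim job.

    Field names here are assumptions -- verify against the model's
    schema on its Replicate API page before running.
    """
    if end <= start:
        raise ValueError("end must be after start")
    return {"video": video_url, "start_time": start, "end_time": end}

# Usage (requires `pip install replicate` and REPLICATE_API_TOKEN set):
# import replicate
# output = replicate.run(
#     "lucataco/trim-video",
#     input=build_trim_input("https://example.com/clip.mp4", 2.0, 10.0),
# )
```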

Which models offer the best balance of cost and quality?

If you want strong quality with minimal compute, luma/modify-video is a great middle ground. It supports style transfer and prompt-based edits without requiring long render times.

For workflows that combine enhancement and output-ready results, lucataco/video-utils provides versatile functions (trim, merge, reframe) in one package.

What works best for stylizing or transforming videos?

For creative transformations, luma/modify-video lets you apply visual style changes directly from a text prompt. You can make your clip look painted, cinematic, or stylized without manual editing.

If you want to go beyond visuals and add synchronized sound effects, try zsxkib/mmaudio—it generates contextual audio that matches motion and mood in the video.

What works best for reframing or resizing clips?

luma/reframe-video specializes in changing aspect ratios while keeping subjects centered. It’s ideal for adapting horizontal footage for vertical formats like TikTok or Reels.

It outputs in 720p and supports videos up to about 30 seconds long, making it well-suited for social content workflows.
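Given those limits, it's worth validating clip length before submitting a job. A minimal pre-flight check, assuming the roughly 30-second cap stated above:

```python
MAX_REFRAME_SECONDS = 30  # approximate cap noted for luma/reframe-video

def can_reframe(duration_s: float) -> bool:
    """Return True if a clip's duration fits the model's stated limit."""
    return 0 < duration_s <= MAX_REFRAME_SECONDS
```

For longer footage, trim or split the clip first (for example with lucataco/trim-video) and reframe the pieces.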

What’s the difference between key subtypes or approaches in this collection?

There are three main editing categories:

  • Structural edits: trimming, merging, and reframing clips (lucataco/trim-video, lucataco/video-merge, luma/reframe-video).
  • Stylistic transformations: prompt-driven visual changes (luma/modify-video).
  • Audio work: extracting, merging, or generating soundtracks (lucataco/extract-audio, lucataco/video-audio-merge, zsxkib/mmaudio).

Each category can be used separately or chained together for more complex edits.

What kinds of outputs can I expect from these models?

Most models return enhanced or edited MP4 videos, though some (like lucataco/extract-audio or lucataco/frame-extractor) produce separate audio files or image frames.

Visual edits preserve motion while changing tone, style, or composition; audio models create synchronized, AI-generated soundtracks.

How can I self-host or push a model to Replicate?

Many of these video-editing models are open source. You can fork a community-packaged one (for example, lucataco/video-utils) and customize it using Cog or Docker.

To publish your own model, define its interface with Cog (a cog.yaml build config plus a predict.py that declares inputs and outputs), push it to your account with cog push, and it will run on Replicate's managed GPUs.
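As a rough sketch, a minimal cog.yaml might look like this; the package list and versions are placeholders for whatever your model actually needs:

```yaml
# Minimal cog.yaml sketch -- packages and versions are placeholders.
build:
  gpu: true
  python_version: "3.11"
  python_packages:
    - "torch"
predict: "predict.py:Predictor"
```

With that in place and the Cog CLI installed, `cog push r8.im/<your-username>/<model-name>` uploads the model to your account.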

Can I use these models for commercial work?

Yes, many of the tools listed are licensed for commercial use, but always confirm on each model’s page.

If a model includes third-party data (like pretrained style references), check for attribution or redistribution requirements before using outputs in published media.

How do I use or run these models?

Upload your source video, choose the desired transformation or effect, and click Run.

For example, you can reframe a landscape clip into portrait format with luma/reframe-video, or apply a “cartoon” style prompt in luma/modify-video.

You can also chain tasks—extract frames, enhance visuals, and then merge the results into one final file.
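The chaining pattern above amounts to threading each step's output URL into the next step's input. A sketch of that glue, with the replicate.run calls commented out since they need an API token; the "video", "prompt", and "audio" field names are assumptions to verify against each model's schema:

```python
def next_payload(prev_output_url: str, **extra) -> dict:
    """Feed one step's output into the next step's input (field names assumed)."""
    return {"video": prev_output_url, **extra}

# Hypothetical chain (uncomment with `pip install replicate` and a token set):
# import replicate
# styled = replicate.run("luma/modify-video",
#                        input=next_payload(src_url, prompt="cartoon"))
# final = replicate.run("lucataco/video-audio-merge",
#                       input=next_payload(styled, audio=audio_url))
```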

What should I know before running a job in this collection?

  • Shorter clips (under 30 seconds) process faster and more reliably.
  • Keep input resolution moderate if you’re applying heavy style transfer.
  • For videos with dialogue or music, run lucataco/extract-audio first, edit visuals separately, and re-merge using lucataco/video-audio-merge.
  • Some models produce fixed-resolution outputs (e.g., 720p for luma/reframe-video), so check before scaling.
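The dialogue workflow in the third tip can be written down as an ordered plan. The model names come from this collection, but the payload field names are assumptions to check against each model's schema:

```python
def audio_roundtrip_plan(video_url: str, style_prompt: str) -> list:
    """Ordered (model_ref, input) steps that keep the original audio intact.

    Later steps reference earlier outputs symbolically; in a real run you
    would substitute the URLs returned by each replicate.run call.
    """
    return [
        ("lucataco/extract-audio", {"video": video_url}),
        ("luma/modify-video", {"video": video_url, "prompt": style_prompt}),
        ("lucataco/video-audio-merge",
         {"video": "<output of step 2>", "audio": "<output of step 1>"}),
    ]
```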

Any other collection-specific tips or considerations?