Enhance videos

Frequently asked questions

Which models are the fastest?

For short clips or smaller videos, runwayml/upscale-v1 is one of the fastest options. It upscales up to 4× and supports outputs up to 4K for videos under 40 seconds.

lucataco/real-esrgan-video is also efficient for quick enhancement of MP4 files, making it practical for short- to medium-length projects.

Which models offer the best balance of cost and quality?

lucataco/real-esrgan-video delivers strong results for most footage without heavy artifacts. It’s a solid default for general-purpose enhancement.

If you want a polished, professional look, topazlabs/video-upscale offers premium-quality detail enhancement and stable results.

What works best for improving faces in videos?

For face-specific restoration, pbarker/gfpgan-video and zsxkib/stable-video-face-restoration are built to sharpen facial details and reduce compression artifacts.

You can chain a face-restoration pass with an upscaler like lucataco/real-esrgan-video to recover features and increase resolution in one workflow.
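If you script that chain with the Replicate Python client, a minimal sketch might look like the following. The input keys (`video`, `strength`, `scale`) and the per-stage parameters are assumptions — check each model's page for its actual input schema, and note that some community models require an explicit version hash in the identifier.

```python
# Sketch of a two-stage enhancement chain: face restoration first,
# then upscaling. Input keys and parameter values are assumptions;
# verify them against each model's page.

STAGES = [
    ("pbarker/gfpgan-video", {"strength": 0.8}),
    ("lucataco/real-esrgan-video", {"scale": 2}),
]

def run_chain(video, stages=STAGES, runner=None):
    """Feed each stage's output into the next stage's input.

    `runner` defaults to replicate.run when available; it is
    injectable so the chain logic can be tested without an API call.
    """
    if runner is None:
        import replicate  # requires REPLICATE_API_TOKEN to be set
        runner = replicate.run
    current = video
    for model, params in stages:
        current = runner(model, input={"video": current, **params})
    return current
```

Swapping the stage order (upscale first, restore second) usually wastes compute, since the face model then has to process four times as many pixels.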

What works best for old or stylized footage?

For restoring vintage or black-and-white clips, arielreplicate/deoldify_video colorizes footage and improves tonal depth.

For anime and hand-drawn styles, tencentarc/animesr is tuned for clean lines and stable colors.

What’s the difference between key subtypes or approaches in this collection?

Video enhancement models generally fall into three groups:

  • Upscalers, which increase resolution (often up to 4×).
  • Restoration models, which sharpen detail, recover faces, and reduce compression artifacts.
  • Colorization and frame-interpolation tools, which improve color or motion.

Pick the group that matches your goal, or combine them for layered improvements.

What kinds of outputs can I expect from these models?

Most models output enhanced MP4 videos.

Upscalers produce higher-resolution versions (often up to 4×), restoration models return cleaner, sharper clips, and colorization or interpolation tools output versions with improved color or motion.

How can I self-host or push a model to Replicate?

Open tools like lucataco/real-esrgan-video or pbarker/gfpgan-video can be self-hosted with Cog or Docker.

To publish your own pipeline on Replicate, package it with Cog: define inputs (for example, video_file, scale, mode) and the enhanced-video output in your model's predict.py, describe the environment in cog.yaml, then push it to your account with `cog push`.

Can I use these models for commercial work?

Yes, many models in this collection allow commercial use. Always check the License section on the model page to confirm terms for your specific project.

How do I use or run these models?

Upload your video on the model page, set parameters like scale factor or restoration strength, and click Run.
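The same point-and-click run can be scripted with the Replicate Python client. The model name below is from this collection, but the input keys and the accepted scale factors are assumptions to verify against the model page:

```python
# Sketch: preparing and (optionally) launching an upscaling run.
# Input keys and accepted scale factors are assumptions; consult the
# model page for the real schema.

def build_input(video_url: str, scale: int = 2) -> dict:
    """Assemble an input payload for a video upscaler."""
    if scale not in (2, 4):
        raise ValueError("this sketch assumes 2x or 4x upscaling")
    return {"video_path": video_url, "scale": scale}

payload = build_input("https://example.com/clip.mp4", scale=4)

# To run for real (requires the `replicate` package and a
# REPLICATE_API_TOKEN environment variable):
# import replicate
# output = replicate.run("lucataco/real-esrgan-video", input=payload)
```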

For a restoration workflow, process faces first (for example, pbarker/gfpgan-video), then upscale with lucataco/real-esrgan-video or topazlabs/video-upscale.

What should I know before running a job in this collection?

  • Keep initial tests short; longer, high-resolution videos require more compute and time.
  • High-quality source footage yields better results; heavy compression limits what enhancement can recover.
  • Some models only support MP4 input; convert if needed.
  • Frame interpolation works best on steady footage; test small segments before processing an entire video.
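The first tip can be made concrete with a back-of-envelope estimate: work grows roughly with frame count times output pixels, so a long 4K job can cost two orders of magnitude more than a short test clip. The linear model below is an illustrative assumption, not a pricing formula; real throughput varies by model.

```python
# Rough relative-cost model for enhancement jobs (arbitrary units).
# Assumes work scales linearly with frames x output pixels, which is
# an approximation; real model throughput varies.

def relative_cost(width: int, height: int, seconds: float,
                  fps: int = 30, scale: int = 2) -> float:
    frames = seconds * fps
    output_pixels = (width * scale) * (height * scale)
    return frames * output_pixels

# A 10 s 720p test clip vs a 2 min 1080p clip upscaled 4x:
short_test = relative_cost(1280, 720, 10)
full_job = relative_cost(1920, 1080, 120, scale=4)
# full_job is over 100x short_test, which is why trimming a test
# segment before committing to the full video pays off.
```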

Any other collection-specific tips or considerations?