Enhance videos

Upscale, restore, extend, and improve your videos with AI.

Models we recommend

Best overall upscaler: Topaz Video Upscale

Topaz Video Upscale is the gold standard for video upscaling. It handles everything from old home videos to professional footage, with output up to 4K and frame rates up to 120fps. The same technology that powers Topaz Video AI on desktop, now available as an API.

Best for faces: Crystal Video Upscaler

Crystal Video Upscaler is optimized for videos with people — it preserves skin texture, facial identity, and natural details without the plastic look that other upscalers sometimes introduce. Great for interviews, testimonials, and portrait-style video.

Extend videos: Grok Imagine Video Extension

Grok Imagine Video Extension from xAI lets you extend any video by 2-10 seconds. Describe what happens next and it generates a seamless continuation from the last frame — maintaining visual style, motion, and consistency. Great for lengthening clips, adding narrative beats, or iterating on longer sequences.
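Since extensions are limited to the 2-10 second window, it can be handy to clamp requested durations before submitting a job. A minimal sketch (the helper itself is illustrative, not part of any official API):

```python
# Hypothetical helper: Grok Imagine Video Extension accepts extensions
# between 2 and 10 seconds, so clamp any requested length to that range.
MIN_EXTEND_S = 2
MAX_EXTEND_S = 10

def clamp_extension(seconds: float) -> float:
    """Clamp a requested extension length to the supported 2-10 s window."""
    return max(MIN_EXTEND_S, min(MAX_EXTEND_S, seconds))
```

For example, a request for 15 seconds would be clamped down to 10.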

Colorize old footage: DeOldify Video

DeOldify Video adds realistic color to black-and-white video. It's optimized for temporal stability, so colors stay consistent across frames without flickering.

Restore faces in video: GFPGAN Video

GFPGAN Video enhances facial details frame by frame. Useful for improving old footage or cleaning up compressed video where faces have lost detail.

Budget upscaler: Real-ESRGAN Video

Real-ESRGAN Video supports up to 4K output with specialized models for general video and anime content. A reliable, cost-effective option for batch processing.

Smooth motion: FILM Frame Interpolation

FILM Frame Interpolation increases frame rate by generating intermediate frames. Makes choppy footage look smooth without artifacts, even for scenes with large motion.
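Assuming FILM is run as repeated 2x passes (one intermediate frame inserted between every adjacent pair of frames, doubling the effective frame rate each pass), you can estimate how many passes a target frame rate requires. This is a sketch of that arithmetic, not the model's API:

```python
import math

def interpolation_passes(source_fps: float, target_fps: float) -> int:
    """Number of 2x interpolation passes needed to reach target_fps.

    Assumes each pass inserts one intermediate frame between every
    adjacent pair, doubling the effective frame rate.
    """
    if target_fps <= source_fps:
        return 0
    return math.ceil(math.log2(target_fps / source_fps))

def effective_fps(source_fps: float, passes: int) -> float:
    """Frame rate after the given number of doubling passes."""
    return source_fps * 2 ** passes
```

Going from 15 fps to 60 fps takes two passes; 24 fps to 60 fps also takes two passes, overshooting to an effective 96 fps that you would then resample down.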

Tips

  • For the best results, enhance in stages: fix faces first, then upscale resolution.
  • Start with short test clips before processing long videos — upscaling is compute-intensive.
  • Higher resolution output costs more and takes longer. If you just need 1080p, don't upscale to 4K.
  • For anime content, use Real-ESRGAN Video's anime-specific model for cleaner lines and colors.
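The staged workflow in the first tip can be written down as an ordered plan. The model slugs below come from this collection (the frame-interpolation slug is a hypothetical placeholder), and the ordering helper is a sketch rather than an official pipeline API:

```python
# Sketch of the staged workflow from the tips above: restoration and
# colorization passes run before upscaling, interpolation runs last.
STAGE_ORDER = ["colorize", "restore_faces", "upscale", "interpolate"]

STAGE_MODELS = {
    "colorize": "arielreplicate/deoldify_video",
    "restore_faces": "pbarker/gfpgan-video",
    "upscale": "lucataco/real-esrgan-video",
    "interpolate": "example/film-interpolation",  # hypothetical slug
}

def plan(stages: set[str]) -> list[str]:
    """Return the models to run, in the recommended order."""
    return [STAGE_MODELS[s] for s in STAGE_ORDER if s in stages]
```

For example, requesting face restoration plus upscaling yields the GFPGAN pass first, then Real-ESRGAN.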

Frequently asked questions

Which models are the fastest?

For short clips or smaller videos, runwayml/upscale-v1 is one of the fastest options. It upscales up to 4× and supports outputs up to 4K for videos under 40 seconds.
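Given the stated limits (up to 4x scaling, output up to 4K), the largest usable scale factor for a given input resolution follows directly. A hedged helper, not an official API:

```python
# Illustrative helper based on the limits stated above: up to 4x
# upscaling with output capped at 4K (3840x2160).
MAX_SCALE = 4
MAX_W, MAX_H = 3840, 2160

def max_usable_scale(width: int, height: int) -> float:
    """Largest scale factor that keeps output within the 4K cap."""
    return min(MAX_SCALE, MAX_W / width, MAX_H / height)
```

A 1080p source can only be upscaled 2x before hitting the 4K ceiling, while a 640x360 clip can use the full 4x.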

lucataco/real-esrgan-video is also efficient for quick enhancement of MP4 files, making it practical for short- to medium-length projects.

Which models offer the best balance of cost and quality?

lucataco/real-esrgan-video delivers strong results for most footage without heavy artifacts. It’s a solid default for general-purpose enhancement.

If you want a polished, professional look, topazlabs/video-upscale offers premium-quality detail enhancement and stable results.

What works best for improving faces in videos?

For face-specific restoration, pbarker/gfpgan-video and zsxkib/stable-video-face-restoration are built to sharpen facial details and reduce compression artifacts.

You can chain a face-restoration pass with an upscaler like lucataco/real-esrgan-video to recover features and increase resolution in one workflow.

What works best for old or stylized footage?

For restoring vintage or black-and-white clips, arielreplicate/deoldify_video colorizes footage and improves tonal depth.

For anime and hand-drawn styles, tencentarc/animesr is tuned for clean lines and stable colors.

What’s the difference between key subtypes or approaches in this collection?

Video enhancement models generally fall into three groups:

  • Upscalers (for example, topazlabs/video-upscale, lucataco/real-esrgan-video) increase resolution and recover detail.
  • Restoration and colorization models (for example, pbarker/gfpgan-video, arielreplicate/deoldify_video) repair faces, reduce artifacts, and add color.
  • Frame interpolation models (for example, FILM) generate intermediate frames for smoother motion.

Pick the group that matches your goal, or combine them for layered improvements.

What kinds of outputs can I expect from these models?

Most models output enhanced MP4 videos.

Upscalers produce higher-resolution versions (often up to 4×), restoration models return cleaner, sharper clips, and colorization or interpolation tools output versions with improved color or motion.

How can I self-host or push a model to Replicate?

Open tools like lucataco/real-esrgan-video or pbarker/gfpgan-video can be self-hosted with Cog or Docker.

To publish your own pipeline on Replicate, define inputs (for example, video_file, scale, mode) and outputs (for example, enhanced_video) in your predict.py, configure the environment in cog.yaml, then push it to your account with cog push.
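For reference, Cog's environment lives in a cog.yaml next to your predict.py. A minimal sketch (the package versions and predictor name are illustrative assumptions, not a template from this collection):

```yaml
# cog.yaml — environment for a hypothetical video-enhancement model
build:
  gpu: true
  python_version: "3.11"
  python_packages:
    - "torch==2.2.0"
predict: "predict.py:Predictor"
```

Then run cog push r8.im/your-username/your-model-name to publish it to your account.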

Can I use these models for commercial work?

Yes, many models in this collection allow commercial use. Always check the License section on the model page to confirm terms for your specific project.

How do I use or run these models?

Upload your video on the model page, set parameters like scale factor or restoration strength, and click Run.

For a restoration workflow, process faces first (for example, pbarker/gfpgan-video), then upscale with lucataco/real-esrgan-video or topazlabs/video-upscale.
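The two-stage workflow above can be sketched with Replicate's Python client, whose entry point is replicate.run(model, input=...). The input key names ("video", "scale") are assumptions here; check each model's page for its actual schema. The run function is injected so the chain can be exercised without network access:

```python
def restore_then_upscale(video_url: str, run):
    """Two-stage enhancement sketch: face restoration, then upscaling.

    Pass Replicate's client function as `run` (i.e. replicate.run).
    Input key names ("video", "scale") are assumptions; check each
    model page on Replicate for its actual input schema.
    """
    # Stage 1: recover facial detail frame by frame.
    restored = run("pbarker/gfpgan-video", input={"video": video_url})
    # Stage 2: upscale the restored clip.
    return run("lucataco/real-esrgan-video",
               input={"video": restored, "scale": 2})

# Usage (requires the replicate package and an API token):
#   import replicate
#   out = restore_then_upscale("https://example.com/clip.mp4", replicate.run)
```

Injecting `run` also makes it easy to swap in topazlabs/video-upscale for the second stage.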

What should I know before running a job in this collection?

  • Keep initial tests short; longer, high-resolution videos require more compute and time.
  • High-quality source footage yields better results; heavy compression limits what enhancement can recover.
  • Some models only support MP4 input; convert if needed.
  • Frame interpolation works best on steady footage; test small segments before processing an entire video.

Any other collection-specific tips or considerations?