arielreplicate / deoldify_video

Add colours to old video footage.

  • Public
  • 4.9K runs
  • T4
  • GitHub
  • License

Input

file (required)

Path to a video

integer

This determines the resolution at which the color portion of the video is rendered. Lower resolutions render faster, and colors also tend to look more vibrant. Older and lower-quality footage in particular will generally benefit from lowering the render factor. Higher render factors are often better for higher-quality footage, but the colors may get slightly washed out. The default value has been carefully chosen and should work okay for most scenarios (but probably won't be the best).

Default: 21
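To make the speed/quality trade-off concrete: in the upstream DeOldify code the color pass is rendered at roughly render_factor × 16 pixels per side (a detail from the upstream source, not stated on this page), so the sketch below estimates the render resolution for a few factors:

```python
# Hedged sketch: upstream DeOldify renders the color pass at about
# render_factor * 16 px per side (render_base = 16 in its source);
# exact behavior may differ between versions.

RENDER_BASE = 16  # pixels per unit of render_factor, per upstream source

def render_resolution(render_factor: int) -> int:
    """Approximate side length (px) of the square used for colorization."""
    return render_factor * RENDER_BASE

# The default of 21 corresponds to roughly a 336 px render;
# higher factors render more pixels and therefore take longer.
for rf in (10, 21, 35):
    print(rf, "->", render_resolution(rf), "px")
```

This is why lowering the render factor speeds things up: the colorization network simply processes a smaller image.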

Output

This example was created by a different version, arielreplicate/deoldify_video:2569e3e7.

Run time and cost

This model costs approximately $0.086 to run on Replicate, or 11 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 7 minutes. The predict time for this model varies significantly based on the inputs.
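As a quick sanity check on the quoted pricing (numbers taken from this page), the "11 runs per $1" figure follows directly from the per-run cost:

```python
# Sanity-check the quoted pricing: $0.086 per run -> ~11 whole runs per $1.
cost_per_run = 0.086  # USD, figure from this page
runs_per_dollar = int(1 / cost_per_run)  # truncate to whole runs
print(runs_per_dollar)  # 11
```

Actual cost per run still varies with your inputs, since predict time is billed.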

Readme

DeOldify

The easiest way to colorize images: DeOldify Image Colorization on DeepAI.

The most advanced version of DeOldify image colorization is available exclusively at MyHeritage In Color. Try a few images for free!

Get more updates on Twitter.

About DeOldify

Simply put, the mission of this project is to colorize and restore old images and film footage. We’ll get into the details in a bit, but first let’s see some pretty pictures and videos!

Something to keep in mind: historical accuracy remains a huge challenge!

About the model

The video model is optimized for smooth, consistent, and flicker-free video. It is definitely the least colorful of the three models, but it's honestly not too far off from "stable". The model has the same architecture as "stable" but differs in training: it's trained on just 2.2% of ImageNet data once at 192px, using only the initial generator/critic pretrain and GAN portions of NoGAN training (about 1 hour of direct GAN training).
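The NoGAN recipe described above can be sketched as a three-phase schedule. The function and phase names below are illustrative stand-ins, not DeOldify's actual API; only the overall order (long conventional pretraining, then a brief adversarial burst) is taken from the readme:

```python
# Illustrative sketch of the NoGAN schedule described in the readme:
# lengthy conventional pretraining, then only a short burst of direct
# GAN training. Names and durations are stand-ins, not DeOldify's code.

def nogan_schedule(gan_minutes: int = 60) -> list[tuple[str, str]]:
    return [
        # 1) Pretrain the generator alone with a conventional (perceptual) loss.
        ("pretrain_generator", "feature loss only, no adversarial signal"),
        # 2) Pretrain the critic on real images vs. frozen generator outputs.
        ("pretrain_critic", "binary real/fake loss, generator frozen"),
        # 3) Short direct GAN phase; the readme cites about 1 hour.
        ("gan_training", f"~{gan_minutes} min of adversarial fine-tuning"),
    ]

for phase, note in nogan_schedule():
    print(phase, "-", note)
```

Keeping the adversarial phase this short is the point of NoGAN: most of the learning happens in the stable pretraining phases, which is what makes the video output consistent and flicker-free.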

License

All code in this repository is under the MIT license as specified by the LICENSE file.