arielreplicate / deoldify_image

Add colours to old images

  • Public
  • 418.9K runs
  • T4
  • GitHub
  • License

Input

input_image
*file

Path to an image

*string

Which model to use. Artistic produces more vibrant color but may leave important parts of the image gray; Stable is better for nature scenery and is less prone to leaving human parts gray.

integer

The default value of 35 has been carefully chosen and should work OK for most scenarios (but probably won't be the best). This determines the resolution at which the color portion of the image is rendered. Lower resolutions render faster, and colors also tend to look more vibrant. Older and lower-quality images in particular will generally benefit from lowering the render factor. Higher render factors are often better for higher-quality images, but the colors may get slightly washed out.

Default: 35
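For illustration, here is a minimal sketch of calling this model through the Replicate Python client. It assumes the replicate package is installed and REPLICATE_API_TOKEN is set; input_image is documented above, but model_name and render_factor are assumed field names inferred from the descriptions, and a specific version tag may need to be pinned.

```python
# Minimal sketch, not the exact schema: `model_name` and `render_factor` are
# assumed field names based on the parameter descriptions above.
import replicate

output = replicate.run(
    "arielreplicate/deoldify_image",   # pin a version tag if required
    input={
        "input_image": open("old_photo.jpg", "rb"),
        "model_name": "Artistic",      # or "Stable" for nature scenes and portraits
        "render_factor": 35,           # lower = faster, more vibrant; higher suits high-quality scans
    },
)
print(output)  # URL or file handle pointing to the colorized image
```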

Output


This example was created by a different version, arielreplicate/deoldify_image:376c74a2.

Run time and cost

This model costs approximately $0.023 to run on Replicate, or 43 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 101 seconds. The predict time for this model varies significantly based on the inputs.

Readme

DeOldify

The easiest way to colorize images: DeOldify Image Colorization on DeepAI.

The most advanced version of DeOldify image colorization is available exclusively at MyHeritage In Color. Try a few images for free!

Get more updates on Twitter.

About DeOldify

Simply put, the mission of this project is to colorize and restore old images and film footage. We’ll get into the details in a bit, but first let’s see some pretty pictures and videos!

Something to keep in mind: historical accuracy remains a huge challenge!

About the demo

The demo has two available models:

  • Artistic: This model achieves the highest quality results in image coloration, in terms of interesting details and vibrance. The most notable drawback, however, is that it’s a bit of a pain to fiddle around with to get the best results (you have to adjust the rendering resolution, or render_factor, to achieve this). Additionally, the model does not do as well as Stable in a few key common scenarios: nature scenes and portraits. The model uses a resnet34 backbone on a UNet with an emphasis on depth of layers on the decoder side. This model was trained with 5 critic pretrain/GAN cycle repeats via NoGAN, in addition to the initial generator/critic pretrain/GAN NoGAN training, at 192px. This adds up to a total of 32% of Imagenet data trained once (12.5 hours of direct GAN training).

  • Stable: This model achieves the best results with landscapes and portraits. Notably, it produces fewer “zombies”, where faces or limbs stay gray rather than being colored in properly. It generally has fewer odd miscolorations than Artistic, but it’s also less colorful in general. This model uses a resnet101 backbone on a UNet with an emphasis on width of layers on the decoder side. This model was trained with 3 critic pretrain/GAN cycle repeats via NoGAN, in addition to the initial generator/critic pretrain/GAN NoGAN training, at 192px. This adds up to a total of 7% of Imagenet data trained once (3 hours of direct GAN training). A usage sketch for running either model locally appears after this list.
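The sketch below assumes the upstream DeOldify repository is installed and the pretrained generator weights (ColorizeArtistic_gen.pth or ColorizeStable_gen.pth) have been downloaded into ./models; the helper names follow the repository’s notebooks and may vary between DeOldify versions.

```python
# Minimal sketch, assuming the DeOldify repo is installed and the pretrained
# generator weights have been placed in ./models.
from deoldify import device
from deoldify.device_id import DeviceId

device.set(device=DeviceId.GPU0)  # use DeviceId.CPU when no GPU is available

from deoldify.visualize import get_image_colorizer

# artistic=True loads the resnet34-backed generator described above;
# artistic=False loads the wider resnet101-backed "stable" generator.
colorizer = get_image_colorizer(artistic=True)

# Lower render_factor renders faster and tends to look more vibrant;
# higher values usually suit higher-quality source images.
result = colorizer.get_transformed_image("old_photo.jpg", render_factor=35)
result.save("old_photo_colorized.jpg")
```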

License

All code in this repository is under the MIT license as specified by the LICENSE file.