
Fixing faces with GFPGAN and Codeformer


GFPGAN and Codeformer are two models that can fix faces in images.

They are both fast and can be run in the cloud with an API. They are particularly useful for cleaning up images generated by other AI models, especially older ones.

GFPGAN

TencentArc’s GFPGAN has long been the go-to model for fixing faces in images, whether that’s correcting them while upscaling or fixing the mistakes in AI-generated faces.

It is very good at:

  • upscaling low resolution faces, such as those in old photos
  • fixing early AI’s mistakes in faces, especially eyes
  • removing noise from an image

It does not:

  • work well with high resolution faces (it tends to remove details)
  • remove all compression artefacts
  • fix scratches or other damage

Use it in modern workflows alongside other upscalers. Fix faces with GFPGAN, then upscale with another model.
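
As a rough sketch of that two-step workflow using the Replicate Python client: the model identifiers and input names below are assumptions for illustration, so check each model's API page on Replicate for the exact schema before running it.

```python
import replicate

# Step 1: fix the face with GFPGAN.
# Model identifier and input name are assumptions -- check the
# model's API page on Replicate for the exact schema.
face_fixed = replicate.run(
    "tencentarc/gfpgan",
    input={"img": "https://example.com/portrait.png"},  # placeholder input image
)

# Step 2: upscale the face-fixed result with a separate upscaler
# (Real-ESRGAN is used here purely as an example).
upscaled = replicate.run(
    "nightmareai/real-esrgan",
    input={"image": face_fixed, "scale": 4},
)

print(upscaled)  # a URL or file object, depending on the client version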

Example face fixes

In this example, a Midjourney image from 2022 is fixed using GFPGAN in 2.6s.

See how well the eyes are fixed, but also note how some skin blemishes are removed. The sharpening of the soft focus is also noticeable and undesirable.

In another example we can see how GFPGAN fixes the face in an old Victorian photo. The face looks really good and much of the identity is preserved. However, there are still JPEG artefacts and picture damage.

Run in the cloud with an API

We recommend using GFPGAN via the Real-ESRGAN model on Replicate. Turn on the face_enhance option to enable GFPGAN.

Read about running Real-ESRGAN + GFPGAN with an API.
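
With the Replicate Python client, the call might look like the sketch below. The model identifier and input names are assumptions; confirm them against the model's API schema on Replicate.

```python
import replicate

# Real-ESRGAN with GFPGAN face enhancement turned on.
# Model identifier and input names are assumptions -- confirm them
# against the Real-ESRGAN model's API schema on Replicate.
output = replicate.run(
    "nightmareai/real-esrgan",
    input={
        "image": "https://example.com/old-photo.jpg",  # placeholder input image
        "scale": 2,            # upscaling factor
        "face_enhance": True,  # run GFPGAN on any detected faces
    },
)
print(output)
```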

Here is a side-by-side comparison of GFPGAN and Real-ESRGAN:

Codeformer

Codeformer by sczhou is another choice for fixing badly generated AI faces.

Unlike GFPGAN, it will typically leave alone any part of the image that is not a face (you can optionally enhance the background with Real-ESRGAN).

It also takes a more heavy-handed approach to fixing faces than GFPGAN. This means it can fix the very worst of AI faces, but when fixes need to be subtle, it can degrade likeness.

It is very good at:

  • fixing really bad AI mistakes in faces
  • upscaling low resolution faces, such as those in old photos

It does not work well with:

  • high resolution faces
  • faces with lots of compression artefacts
  • subtle fixes where likeness needs preserving

In these cases it can get confused and return distorted and broken results.
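
Here is a rough sketch of running Codeformer with the Replicate Python client, including the optional background enhancement and fidelity control mentioned above. The model identifier and input names are assumptions taken from the model's public listing, so check its API schema before relying on them.

```python
import replicate

# Codeformer face restoration. Model identifier and input names are
# assumptions -- check the model's API schema on Replicate.
output = replicate.run(
    "sczhou/codeformer",
    input={
        "image": "https://example.com/bad-ai-face.png",  # placeholder input image
        "codeformer_fidelity": 0.7,  # higher values preserve more likeness; lower values restore harder
        "background_enhance": True,  # optionally enhance non-face regions with Real-ESRGAN
        "face_upsample": True,       # upsample the restored faces as well
        "upscale": 2,                # overall upscaling factor
    },
)
print(output)
```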

Example face fixes

In this example, using the same Midjourney image as before, Codeformer fixes the face in 3s.

Notice that the edges of the image are unchanged. The eyes are fixed and the face is improved, although it looks a little different. As with GFPGAN, the skin blemishes have also been removed.

In another example we can see how Codeformer fixes the face in an old Victorian photo. The face looks really good and much of the identity is preserved. However, there are still JPEG artefacts and picture damage.

GFPGAN vs Codeformer

GFPGAN and Codeformer are both good at fixing faces.

If you want to maintain likeness, use GFPGAN. If the face you need to fix is really bad, try Codeformer. Otherwise, they are very similar.

And the Victorian example: