Color match and white balance fixes for images
Inputs:

input_image: The input image.
reference_image: The reference image. If not provided, only white balance fixes will be applied.
method: The method to use for color transfer. Default: "mkl"
strength: Strength of the color transfer effect (0.0 to 1.0). Default: 1
fix_white_balance: Apply automatic white balance to the input image (before color transfer). Default: false
white_balance_percentile: Percentile for white balance calculation (0.0 to 100.0). Default: 95
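Because reference_image is optional, the model can also be used purely as a white balance fix. Below is a minimal sketch using the Python client covered later on this page; the image URL is a placeholder, and the default method and strength are left unchanged:

import replicate

# With no reference_image, only the automatic white balance fix is applied.
output = replicate.run(
    "fofr/color-matcher",
    input={
        "input_image": "https://example.com/photo.png",  # placeholder: replace with your image URL
        "fix_white_balance": True,
        "white_balance_percentile": 95,
    },
)
print(output)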
Run this model in Node.js with one line of code:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/color-matcher using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const input = {
  method: "mkl",
  strength: 1,
  input_image: "https://replicate.delivery/pbxt/MtNw1hr0C3Ff4v0ixb7ldfySBGUsoszivzdoMNQYyRaECqDV/20250407_2359_Curious%20Expression_remix_01jr97ns5we8ebc2595ryj0xak.png",
  reference_image: "https://replicate.delivery/pbxt/MtNw1Safv0BWXmoNNObPAZdu9vMoaUZBWXlFPtDOpDgcMNFr/0_1.webp",
  fix_white_balance: false,
  white_balance_percentile: 95
};

const output = await replicate.run("fofr/color-matcher", { input });

// To access the file URL:
console.log(output.url());
//=> "http://example.com"

// To write the file to disk:
fs.writeFile("my-image.png", output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
import replicate
output = replicate.run(
    "fofr/color-matcher",
    input={
        "method": "mkl",
        "strength": 1,
        "input_image": "https://replicate.delivery/pbxt/MtNw1hr0C3Ff4v0ixb7ldfySBGUsoszivzdoMNQYyRaECqDV/20250407_2359_Curious%20Expression_remix_01jr97ns5we8ebc2595ryj0xak.png",
        "reference_image": "https://replicate.delivery/pbxt/MtNw1Safv0BWXmoNNObPAZdu9vMoaUZBWXlFPtDOpDgcMNFr/0_1.webp",
        "fix_white_balance": False,
        "white_balance_percentile": 95
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
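Recent versions of the replicate Python client return a file-like output object; the sketch below assumes that object exposes a read() method (older client versions may return a plain URL string instead, in which case you would download it yourself):

# Continuing from the replicate.run(...) call above.
# Assumption: `output` is a file-like object with a .read() method.
with open("output.png", "wb") as f:
    f.write(output.read())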
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "input": {
      "method": "mkl",
      "strength": 1,
      "input_image": "https://replicate.delivery/pbxt/MtNw1hr0C3Ff4v0ixb7ldfySBGUsoszivzdoMNQYyRaECqDV/20250407_2359_Curious%20Expression_remix_01jr97ns5we8ebc2595ryj0xak.png",
      "reference_image": "https://replicate.delivery/pbxt/MtNw1Safv0BWXmoNNObPAZdu9vMoaUZBWXlFPtDOpDgcMNFr/0_1.webp",
      "fix_white_balance": false,
      "white_balance_percentile": 95
    }
  }' \
  https://api.replicate.com/v1/models/fofr/color-matcher/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
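If you prefer not to shell out to cURL, the same request can be made from Python. This is a sketch of the call above using the third-party requests library (an assumed dependency), not an official client:

import os
import requests

# Mirrors the cURL example: the "Prefer: wait" header makes the call block
# until the prediction finishes (or times out).
response = requests.post(
    "https://api.replicate.com/v1/models/fofr/color-matcher/predictions",
    headers={
        "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
        "Content-Type": "application/json",
        "Prefer": "wait",
    },
    json={
        "input": {
            "method": "mkl",
            "strength": 1,
            "input_image": "https://replicate.delivery/pbxt/MtNw1hr0C3Ff4v0ixb7ldfySBGUsoszivzdoMNQYyRaECqDV/20250407_2359_Curious%20Expression_remix_01jr97ns5we8ebc2595ryj0xak.png",
            "reference_image": "https://replicate.delivery/pbxt/MtNw1Safv0BWXmoNNObPAZdu9vMoaUZBWXlFPtDOpDgcMNFr/0_1.webp",
            "fix_white_balance": False,
            "white_balance_percentile": 95,
        }
    },
)
response.raise_for_status()
prediction = response.json()
print(prediction["status"], prediction.get("output"))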
{
  "completed_at": "2025-04-24T11:11:30Z",
  "created_at": "2025-04-24T11:11:29.421000Z",
  "data_removed": false,
  "error": "",
  "id": "b1raq6pwhnrm80cpcxz8zsf01r",
  "input": {
    "method": "mkl",
    "strength": 1,
    "input_image": "https://replicate.delivery/pbxt/MtNw1hr0C3Ff4v0ixb7ldfySBGUsoszivzdoMNQYyRaECqDV/20250407_2359_Curious%20Expression_remix_01jr97ns5we8ebc2595ryj0xak.png",
    "reference_image": "https://replicate.delivery/pbxt/MtNw1Safv0BWXmoNNObPAZdu9vMoaUZBWXlFPtDOpDgcMNFr/0_1.webp",
    "fix_white_balance": false,
    "white_balance_percentile": 95
  },
  "logs": "",
  "metrics": {
    "image_count": 1,
    "predict_time": 1.441450903,
    "total_time": 0.579
  },
  "output": "https://replicate.delivery/xezq/9FYGwweQbVWRCK8Eeg17ZocS7v5fWyCQre8Is2YTorFKmzXSB/output.png",
  "started_at": "2025-04-24T11:11:29Z",
  "status": "succeeded",
  "urls": {
    "stream": "https://stream.replicate.com/v1/files/bcwr-adsfd45xn47hpm7fyhwh2tavtmnfed4ichxrfs4l6usomqlmit5a",
    "get": "https://api.replicate.com/v1/predictions/b1raq6pwhnrm80cpcxz8zsf01r",
    "cancel": "https://api.replicate.com/v1/predictions/b1raq6pwhnrm80cpcxz8zsf01r/cancel"
  },
  "version": "hidden"
}
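When you create predictions without the "Prefer: wait" header, you can poll the urls.get endpoint from the response above until the prediction settles. A minimal polling sketch, again using requests and the prediction ID from the example:

import os
import time
import requests

headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}
# urls.get from the example response above.
get_url = "https://api.replicate.com/v1/predictions/b1raq6pwhnrm80cpcxz8zsf01r"

# Poll until the prediction reaches a terminal status.
while True:
    prediction = requests.get(get_url, headers=headers).json()
    if prediction["status"] in ("succeeded", "failed", "canceled"):
        break
    time.sleep(1)

print(prediction["status"], prediction.get("output"))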
This model doesn't have a readme.