
Run ComfyUI with an API


You can run ComfyUI workflows on Replicate, which means you can run them with an API too. Take your custom ComfyUI workflow to production.

You can use our official Python, Node.js, Swift, Elixir and Go clients.

We recommend you follow these steps:

  1. Get your workflow running on Replicate with the fofr/any-comfyui-workflow model (read our instructions and see what’s supported)
  2. Use the Replicate API to run the workflow
  3. Write code to customise the JSON you pass to the model (for example, to change prompts)
  4. Integrate the API into your app or website

Get your API token

You’ll need to sign up for Replicate, then you can find your API token on your account page.

Run your workflow with Python

In this example we’ll run the default ComfyUI workflow, a simple text-to-image flow.

Install Replicate’s Python client library:

pip install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=r8_*********************************
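
The client library reads the token from this environment variable when it makes requests. If you run into authentication errors, a quick sanity check (a minimal sketch, using only the Python standard library) is to confirm the variable is visible to your process:

import os
 
# The Replicate client reads REPLICATE_API_TOKEN from the environment
# automatically; fail early if it isn't set.
if not os.environ.get("REPLICATE_API_TOKEN"):
    raise RuntimeError("REPLICATE_API_TOKEN is not set")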

Import the client and run the workflow:

import json
import replicate
 
output = replicate.run(
    "fofr/any-comfyui-workflow:latest",
    input={
        "workflow_json": json.dumps({
            "3": {
                "inputs": {
                    "seed": 156680208700286,
                    "steps": 20,
                    "cfg": 8,
                    "sampler_name": "euler",
                    "scheduler": "normal",
                    "denoise": 1,
                    "model": ["4", 0],
                    "positive": ["6", 0],
                    "negative": ["7", 0],
                    "latent_image": ["5", 0]
                },
                "class_type": "KSampler",
                "_meta": {"title": "KSampler"}
            },
            "4": {
                "inputs": {
                    "ckpt_name": "Realistic_Vision_V6.0_NV_B1_fp16.safetensors"
                },
                "class_type": "CheckpointLoaderSimple",
                "_meta": {"title": "Load Checkpoint"}
            },
            "5": {
                "inputs": {
                    "width": 512,
                    "height": 512,
                    "batch_size": 1
                },
                "class_type": "EmptyLatentImage",
                "_meta": {"title": "Empty Latent Image"}
            },
            "6": {
                "inputs": {
                    "text": "beautiful scenery nature glass bottle landscape, purple galaxy bottle,",
                    "clip": ["4", 1]
                },
                "class_type": "CLIPTextEncode",
                "_meta": {"title": "CLIP Text Encode (Prompt)"}
            },
            "7": {
                "inputs": {
                    "text": "text, watermark",
                    "clip": ["4", 1]
                },
                "class_type": "CLIPTextEncode",
                "_meta": {"title": "CLIP Text Encode (Prompt)"}
            },
            "8": {
                "inputs": {
                    "samples": ["3", 0],
                    "vae": ["4", 2]
                },
                "class_type": "VAEDecode",
                "_meta": {"title": "VAE Decode"}
            },
            "9": {
                "inputs": {
                    "filename_prefix": "ComfyUI",
                    "images": ["8", 0]
                },
                "class_type": "SaveImage",
                "_meta": {"title": "Save Image"}
            }
        }),
        "randomise_seeds": True,
        "return_temp_files": False
    }
)
 
# Save all generated images
for i, file_output in enumerate(output):
    with open(f'output_{i}.png', 'wb') as f:
        f.write(file_output.read())
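
Each item in output is a file object. If you only want the hosted URL for each image rather than the bytes (assuming a recent version of the Python client, where outputs are returned as FileOutput objects), you can read its url attribute instead of downloading the file:

for file_output in output:
    # Print the URL where each generated image is hosted
    print(file_output.url)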

Customise your workflow

You’ll probably want to customise your workflow so you can send in different prompts and other options. You can do this by changing the JSON you pass to the model (the same API-format JSON you export from ComfyUI).

For example, you could change the checkpoint and prompt like this:

import json
import replicate
 
def load_workflow_from_file(file_path):
    with open(file_path, 'r') as file:
        return json.load(file)
 
def update_checkpoint_name(workflow, new_name):
    # Node "4" is the CheckpointLoaderSimple node in the default workflow
    workflow["4"]["inputs"]["ckpt_name"] = new_name
 
def update_prompt(workflow, new_prompt):
    # Node "6" is the positive-prompt CLIPTextEncode node
    workflow["6"]["inputs"]["text"] = new_prompt
 
workflow = load_workflow_from_file('workflow.json')
update_checkpoint_name(workflow, "sd_xl_base_1.0.safetensors")
update_prompt(workflow, "a photo of a comfy sofa")
 
output = replicate.run(
    "fofr/any-comfyui-workflow:latest",
    input={
        "workflow_json": json.dumps(workflow),
        "randomise_seeds": True,
        "return_temp_files": False
    }
)
print(output)
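
The same approach works for any other node input: look the node up by its ID and set the value you want. For example, these hypothetical helpers (assuming the node IDs from the default workflow shown above) change the image dimensions and the negative prompt:

def update_dimensions(workflow, width, height):
    # Node "5" is the EmptyLatentImage node
    workflow["5"]["inputs"]["width"] = width
    workflow["5"]["inputs"]["height"] = height
 
def update_negative_prompt(workflow, new_prompt):
    # Node "7" is the negative-prompt CLIPTextEncode node
    workflow["7"]["inputs"]["text"] = new_prompt
 
update_dimensions(workflow, 768, 768)
update_negative_prompt(workflow, "text, watermark, blurry")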

Run your workflow with JavaScript
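
The Node.js client works the same way. Install it with npm install replicate, set the REPLICATE_API_TOKEN environment variable as before, then run the workflow: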

import Replicate from "replicate";
 
const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
 
const workflowJson = {
  "3": {
    "inputs": {
      "seed": 156680208700286,
      "steps": 20,
      "cfg": 8,
      "sampler_name": "euler",
      "scheduler": "normal",
      "denoise": 1,
      "model": ["4", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["5", 0]
    },
    "class_type": "KSampler",
    "_meta": {
      "title": "KSampler"
    }
  },
  "4": {
    "inputs": {
      "ckpt_name": "Realistic_Vision_V6.0_NV_B1_fp16.safetensors"
    },
    "class_type": "CheckpointLoaderSimple",
    "_meta": {
      "title": "Load Checkpoint"
    }
  },
  "5": {
    "inputs": {
      "width": 512,
      "height": 512,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage",
    "_meta": {
      "title": "Empty Latent Image"
    }
  },
  "6": {
    "inputs": {
      "text": "beautiful scenery nature glass bottle landscape, purple galaxy bottle,",
      "clip": ["4", 1]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "CLIP Text Encode (Prompt)"
    }
  },
  "7": {
    "inputs": {
      "text": "text, watermark",
      "clip": ["4", 1]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "CLIP Text Encode (Prompt)"
    }
  },
  "8": {
    "inputs": {
      "samples": ["3", 0],
      "vae": ["4", 2]
    },
    "class_type": "VAEDecode",
    "_meta": {
      "title": "VAE Decode"
    }
  },
  "9": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": ["8", 0]
    },
    "class_type": "SaveImage",
    "_meta": {
      "title": "Save Image"
    }
  }
};
 
const output = await replicate.run(
  "fofr/any-comfyui-workflow:latest",
  {
    input: {
      workflow_json: JSON.stringify(workflowJson),
      randomise_seeds: true,
      return_temp_files: false
    }
  }
);
console.log(output);