
Run your ComfyUI workflow on Replicate


You can run ComfyUI workflows directly on Replicate using the fofr/any-comfyui-workflow model.

You send us your ComfyUI workflow as a JSON blob and we’ll generate your outputs. You can also upload inputs or use URLs in your JSON.
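For example, with the Replicate Python client (pip install replicate, with REPLICATE_API_TOKEN set in your environment), a minimal call might look like the sketch below. The version hash is a placeholder; copy the current one from the model page.

import replicate

with open("workflow_api.json") as f:
    workflow = f.read()

# Run the workflow; the output is a list of generated files
output = replicate.run(
    "fofr/any-comfyui-workflow:<version>",  # placeholder version hash
    input={"workflow_json": workflow},
)
print(output)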

How to use the Replicate model

Get your API JSON

You’ll need the API version of your ComfyUI workflow. This is different from the commonly shared JSON version: it doesn’t include the visual layout information about nodes.

To get your API JSON:

  1. Turn on "Enable Dev mode Options" in the ComfyUI settings (via the settings icon)
  2. Load your workflow into ComfyUI
  3. Export your API JSON using the "Save (API format)" button
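If you want to sanity-check the export, the API format is a flat map of node ids to their class_type and inputs, whereas the UI export has a top-level "nodes" array. A quick check, assuming you saved the file as workflow_api.json:

import json

with open("workflow_api.json") as f:
    workflow = json.load(f)

# The UI export has a top-level "nodes" array; the API export does not
assert "nodes" not in workflow, "This looks like the UI export, not the API format"
print(f"{len(workflow)} nodes loaded")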

Gather your input files

If your workflow takes inputs, like images for img2img or ControlNet, you have three options:

  1. Use a URL
  2. Upload a single input
  3. Upload a zip file or tar file of your inputs

Using URLs as inputs

If you’re using URLs, you should modify your API JSON file to point at a URL:

- "image": "/your-path-to/image.jpg",
+ "image": "https://example.com/image.jpg",

Uploading a single input

You can also give a single input file when running the model. If this is an image or video, we’ll put it directly into the input directory, as input.[extension] – for example input.jpg.

You can then reference this in your workflow using the filename:

- "image": "/your-path-to/my-picture-001.jpg",
+ "image": "image.jpg",

Uploading a zip file or tar file of your inputs

If your model is more complex and requires multiple inputs, you can upload a zip file or tar file of all of them.

These will be downloaded and extracted to the input directory. You can then reference them in your workflow based on their relative paths.

So a zip file containing:

- my_img.png
- references/my_reference_01.jpg
- references/my_reference_02.jpg

Might be used in the workflow like:

"image": "my_img.png",
...
"directory": "references",

We'll always validate that your inputs exist before running your workflow.
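A minimal sketch of building that zip in Python, preserving the relative paths the workflow references:

import zipfile

files = [
    "my_img.png",
    "references/my_reference_01.jpg",
    "references/my_reference_02.jpg",
]

with zipfile.ZipFile("inputs.zip", "w") as zf:
    for path in files:
        # arcname keeps the relative path inside the archive
        zf.write(path, arcname=path)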

Run your workflow

With all your inputs ready, you can now run your workflow.

There are a couple of extra options you can use:

  • return_temp_files – Some workflows save temporary files, for example pre-processed controlnet images. Use this option to also return these files.
  • randomise_seeds – Usually you want to randomise your seeds, so we’ve made this easy for you. Set this option to true to randomise all your seeds.
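Putting it together, here’s a sketch of a full run with both options set (the version hash is again a placeholder):

import replicate

output = replicate.run(
    "fofr/any-comfyui-workflow:<version>",  # placeholder version hash
    input={
        "workflow_json": open("workflow_api.json").read(),
        "randomise_seeds": True,     # re-roll every seed on this run
        "return_temp_files": False,  # set True to also get temporary files back
    },
)
for item in output:
    print(item)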

An example

Here’s an abridged workflow_json input for a simple txt2img run:

{ ... "seed": 156680208700286, "steps": 20, "cfg": 8, "sampler_name": "euler", "scheduler": "normal", "denoise": 1, "positive": ["beautiful scenery nature glass bottle landscape, purple galaxy bottle", 0], "negative": ["text, watermark", 0], "latent_image": [512, 512, 1, 0] ... }

The output is the image generated by this workflow.

Supported weights

We support the most popular model weights, including:

  • SDXL
  • RealVisXL 3.0
  • Realistic Vision 5.1 and 6.0
  • DreamShaper 6
  • TurboVisionXL
  • Stable Video Diffusion
  • AnimateDiff
  • LCM Dreamshaper
  • LCM LoRAs

Also included are all the popular ControlNets and preprocessors. We recommend using the comfyui_controlnet_aux custom node for preprocessors, and ComfyUI Advanced ControlNet is included if you really know what you’re doing.

View the complete list of supported weights or request a weight by raising an issue.

If your exact model isn’t supported, you can also try switching to the closest match. Just update your JSON to use a different model filename.
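For example, here’s a sketch that retargets every stock checkpoint loader. The filename shown is illustrative; use one from the supported weights list:

import json

with open("workflow_api.json") as f:
    workflow = json.load(f)

# Swap the weights on every stock checkpoint loader node
for node in workflow.values():
    if node.get("class_type") == "CheckpointLoaderSimple":
        node["inputs"]["ckpt_name"] = "RealVisXL_V3.0.safetensors"  # illustrative

with open("workflow_api.json", "w") as f:
    json.dump(workflow, f, indent=2)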

Custom nodes

Again, we’ve tried to include the most popular custom nodes.

View the complete list of supported custom nodes. You can also raise an issue to request more custom nodes, or use the GitHub repo as a template to roll your own.