Run your ComfyUI workflow on Replicate

You can run ComfyUI workflows directly on Replicate using the fofr/any-comfyui-workflow model.

You send us your workflow as a JSON blob and we’ll generate your outputs. You can also upload inputs or use URLs in your JSON.

You’ll need the API version of your ComfyUI workflow. This is different from the commonly shared JSON format: the API version doesn’t include visual information about nodes, layout, and so on.

To get your API JSON:

  1. Turn on the “Enable Dev mode Options” from the ComfyUI settings (via the settings icon)
  2. Load your workflow into ComfyUI
  3. Export your API JSON using the “Save (API format)” button
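In the API format, each top-level key is a node ID that maps to that node’s class_type and inputs, which makes the file easy to inspect and edit programmatically. Here’s a minimal Python sketch – the filename workflow_api.json is just a placeholder for wherever you saved your export:

```python
import json

# Load the workflow exported with "Save (API format)"
with open("workflow_api.json") as f:
    workflow = json.load(f)

# Each top-level key is a node ID mapping to its class_type and inputs
for node_id, node in workflow.items():
    print(node_id, node["class_type"], list(node["inputs"].keys()))
```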

If your workflow takes inputs, like images for img2img or ControlNet, you have three options:

  1. Use a URL
  2. Upload a single input
  3. Upload a zip file or tar file of your inputs

If you’re using URLs, you should modify your API JSON file to point at a URL:

- "image": "/your-path-to/image.jpg",
+ "image": "https://example.com/image.jpg",

You can also give a single input file when running the model. If this is an image or video, we’ll put it directly into the input directory as input.[extension] – for example, input.jpg.

You can then reference this in your workflow using that filename:

- "image": "/your-path-to/my-picture-001.jpg",
+ "image": "input.jpg",

If your model is more complex and requires multiple inputs, you can upload a zip file or tar file of all of them.

These will be downloaded and extracted to the input directory. You can then reference them in your workflow based on their relative paths.

So a zip file containing:

- my_img.png
- references/my_reference_01.jpg
- references/my_reference_02.jpg

Might be used in the workflow like:

"image": "my_img.png",
...
"directory": "references",

We’ll always validate that your inputs exist before running your workflow.

With all your inputs ready, you can now run your workflow.

There are a couple of extra options you can use:

  • return_temp_files – Some workflows save temporary files, for example pre-processed controlnet images. Use this option to also return these files.
  • randomise_seeds – Usually you want to randomise your seeds, so we’ve made this easy for you. Set this option to true to randomise all your seeds.
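Putting it together, a full run with these options might look like this sketch using Replicate’s Python client (again, the version hash is a placeholder):

```python
import replicate

with open("workflow_api.json") as f:
    workflow_json = f.read()

# <version> is a placeholder – copy the latest version from the model page
output = replicate.run(
    "fofr/any-comfyui-workflow:<version>",
    input={
        "workflow_json": workflow_json,
        "randomise_seeds": True,      # fresh seeds on every run
        "return_temp_files": False,   # set True to also get temp files
    },
)
print(output)
```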

For example, a workflow_json input might include values like:

{
  ...
  "seed": 156680208700286,
  "steps": 20,
  "cfg": 8,
  "sampler_name": "euler",
  "scheduler": "normal",
  "denoise": 1,
  "positive": ["beautiful scenery nature glass bottle landscape, purple galaxy bottle", 0],
  "negative": ["text, watermark", 0],
  "latent_image": [512, 512, 1, 0]
  ...
}

We support the most popular model weights, including:

  • SDXL
  • RealVisXL 3.0
  • Realistic Vision 5.1 and 6.0
  • DreamShaper 6
  • TurboVisionXL
  • Stable Video Diffusion
  • AnimateDiff
  • LCM Dreamshaper
  • LCM LoRAs

Also included are all the popular ControlNets and preprocessors. We recommend the comfyui_controlnet_aux custom node for preprocessors, and ComfyUI Advanced ControlNet is included if you really know what you’re doing.

View the complete list of supported weights or request a weight by raising an issue.

If your exact model isn’t supported, you can also try switching to the closest match. Just update your JSON to use a different model filename.
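For example, checkpoints are usually loaded by a CheckpointLoaderSimple node, so switching models is a one-line change to its ckpt_name input. This sketch uses an illustrative filename – pick one from the supported weights list:

```python
import json

with open("workflow_api.json") as f:
    workflow = json.load(f)

# Swap the checkpoint for the closest supported match
# (the filename below is illustrative)
for node in workflow.values():
    if node["class_type"] == "CheckpointLoaderSimple":
        node["inputs"]["ckpt_name"] = "RealVisXL_V3.0.safetensors"

with open("workflow_api.json", "w") as f:
    json.dump(workflow, f, indent=2)
```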

Again, we’ve tried to include the most popular custom nodes.

View the complete list of supported custom nodes. You can also raise an issue to request more custom nodes, or use the GitHub repo as a template to roll your own.