Generate images in one second on your Mac using a latent consistency model

Latent consistency models (LCMs) are based on Stable Diffusion, but they can generate images much faster, needing only 4 to 8 steps to produce a good image (compared to the 25 to 50 steps of regular Stable Diffusion). By running an LCM on your M1 or M2 Mac, you can generate 512x512 images at a rate of one per second.

Simian Luo et al. released the first distilled Stable Diffusion model. It's distilled from the Dreamshaper fine-tune, with classifier-free guidance folded into the model's input. Only one model has been distilled so far, but more are on the way: the paper's authors are working on Stable Diffusion 2.1 and SDXL.
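To get an intuition for what "incorporating classifier-free guidance into the model's input" means: regular classifier-free guidance runs the U-Net twice per denoising step and blends the two predictions, while an LCM takes the guidance scale as an extra input and needs only one pass. Here's a rough sketch in Python-flavored pseudocode (the names unet, lcm_unet, and w_embedding are illustrative, not a real API):

# Regular classifier-free guidance: two U-Net passes per denoising step
noise_uncond = unet(latents, t, empty_prompt_embeds)
noise_cond = unet(latents, t, prompt_embeds)
noise = noise_uncond + w * (noise_cond - noise_uncond)

# LCM: the guidance scale w is embedded and fed to the model directly,
# so each of the 4 to 8 steps is a single forward pass
noise = lcm_unet(latents, t, prompt_embeds, w_embedding(w))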

You can run the first latent consistency model in the cloud on Replicate, but it’s also possible to run it locally. As well as generating predictions, you can hack on it, modify it, and build new things.

We’ve written this guide to help you get started.

Prerequisites

You’ll need:

  • a Mac with an M1 or M2 chip
  • 16GB RAM or more
  • macOS 12.3 or higher
  • Python 3.10 or above

We’ve found that an M1 Max or M2 with 32GB RAM can generate images in 1 second, and an M1 Pro with 16GB RAM in 2 to 4 seconds. Please share your benchmarks with us in our GitHub repository.

Set up Python

You need Python 3.10 or above. Run python3 -V to see which Python version you have installed:

$ python3 -V
Python 3.10.6

If it’s 3.10 or above, like here, you’re good to go! Skip on over to the next step.

Otherwise, you’ll need to install Python 3.10. The easiest way to do that is with Homebrew. First, install Homebrew if you haven’t already.

Then, install the latest version of Python:

brew update
brew install python

Now if you run python3 -V you should see 3.10 or above. You might need to reopen your terminal for the new version to be picked up.

Clone the repository and install the dependencies

Run this to clone the LCM script from GitHub:

git clone https://github.com/replicate/latent-consistency-model.git
cd latent-consistency-model

Then, set up a virtualenv to install the dependencies:

python3 -m pip install virtualenv
python3 -m virtualenv venv

Activate the virtualenv:

source venv/bin/activate

(You’ll need to run this command again any time you want to run the script.)

Then, install the dependencies:

pip install -r requirements.txt
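Optionally, you can check that PyTorch (installed as part of the requirements) can see your Mac's GPU through the Metal Performance Shaders (MPS) backend, which is what makes fast local generation possible:

python3 -c "import torch; print(torch.backends.mps.is_available())"

If this prints True, you're all set.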

Run it!

Now you can run your latent consistency model. The script will automatically download the SimianLuo/LCM_Dreamshaper_v7 model (3.44 GB) and the safety checker (1.22 GB) from Hugging Face.

python main.py \
  "a beautiful apple floating in outer space, like a planet" \
  --steps 4 --width 512 --height 512

You’ll see an output like this:

Output image saved to: output/out-20231026-144506.png
Using seed: 48404
100%|███████████████████████████| 4/4 [00:00<00:00,  5.54it/s]
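If you want to hack on the script, its core is a short diffusers text-to-image pipeline. Here's a minimal sketch of the equivalent code, assuming a diffusers release with LCM support; main.py's actual arguments and structure may differ:

from diffusers import DiffusionPipeline

# Downloads the weights from Hugging Face on first run, then loads from cache
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
pipe.to("mps")  # run on the Apple GPU

# Only 4 steps, versus the 25 to 50 that regular Stable Diffusion needs
image = pipe(
    "a beautiful apple floating in outer space, like a planet",
    num_inference_steps=4,
    guidance_scale=8.0,
    width=512,
    height=512,
).images[0]
image.save("out.png")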

We’ve also added a --continuous flag, so you can keep generating image after image until your hard drive is full. Generations after the first one will run a bit faster, too.

python main.py \
  "a beautiful apple floating in outer space, like a planet" \
  --steps 4 --width 512 --height 512 --continuous
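Subsequent generations are faster because the model weights only have to be loaded into memory once. In continuous mode, the loop reuses the already-loaded pipeline, roughly like this (continuing the sketch above):

# Reuse the loaded pipeline: only the denoising steps run each iteration
for i in range(10):
    image = pipe(
        "a beautiful apple floating in outer space, like a planet",
        num_inference_steps=4,
    ).images[0]
    image.save(f"out-{i}.png")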

That’s it!

[Image: latent consistency model generation of "a beautiful apple floating in outer space, like a planet"]

Next steps

Now that you've got the model running locally, try hacking on the script to build something new, or run the model in the cloud on Replicate. If you make something fun, share it with us in the GitHub repository.

Happy hacking!