We’ve been talking a lot about how to run and fine-tune Llama 2 on Replicate. But you can also run Llama locally on your M1/M2 Mac, on Windows, on Linux, or even your phone. The cool thing about running Llama 2 locally is that you don’t even need an internet connection.
Here’s an example using a locally-running Llama 2 to whip up a website about why llamas are cool:
It’s only been a couple of days since Llama 2 was released, but there are already a handful of techniques for running it locally. In this blog post we’ll cover three open-source tools you can use to run Llama 2 on your own devices:
Llama.cpp is a port of Llama in C/C++, which makes it possible to run Llama 2 locally on a Mac using 4-bit integer quantization. It also supports Linux and Windows.
Here’s a one-liner you can use to install it on your M1/M2 Mac:
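A minimal sketch of such a one-liner, piping Replicate's install helper into bash (treat the script URL as an assumption; it may have moved since publication):

```shell
# Download and run an install script that builds llama.cpp with Metal
# support and fetches a quantized Llama 2 model (URL assumed current).
curl -L "https://replicate.fyi/install-llama-cpp" | bash
```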
Here’s what that one-liner does:
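Expanded out, the script performs roughly these steps. This is a sketch: the quantized model filename and the TheBloke Hugging Face repo are assumptions, and flag values are illustrative defaults.

```shell
# Clone llama.cpp and build it with Metal support for Apple Silicon GPUs
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
LLAMA_METAL=1 make

# Download a 4-bit quantized Llama 2 7B chat model (filename assumed)
export MODEL=llama-2-7b-chat.ggmlv3.q4_0.bin
curl -L "https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/${MODEL}" -o models/${MODEL}

# Start an interactive chat session (-ins = instruction mode)
./main -m ./models/${MODEL} --color --ctx_size 2048 -n -1 -ins -b 256 --temp 0.8 --repeat_penalty 1.1
```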
Here's a one-liner for your Intel Mac or Linux machine. It's the same as above, but without the LLAMA_METAL=1 flag:
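Expanded, that's a plain CPU build of llama.cpp followed by the same model download. As before, the quantized model filename is an assumption:

```shell
# Build llama.cpp for CPU only -- note: no LLAMA_METAL=1 here
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Download a 4-bit quantized Llama 2 7B chat model (filename assumed)
export MODEL=llama-2-7b-chat.ggmlv3.q4_0.bin
curl -L "https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/${MODEL}" -o models/${MODEL}

# Chat with it interactively
./main -m ./models/${MODEL} --color --ctx_size 2048 -ins
```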
Here's a one-liner to run on Windows on WSL:
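Inside WSL (assuming an Ubuntu distribution), you first need a compiler toolchain, then the CPU build of llama.cpp works the same way as on Linux. A sketch:

```shell
# Install build tools inside WSL (Ubuntu assumed)
sudo apt update && sudo apt install -y build-essential git curl

# Then clone and build llama.cpp exactly as on Linux
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```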
Ollama is an open-source macOS app (for Apple Silicon) that lets you run, create, and share large language models with a command-line interface. Ollama already has support for Llama 2.
To use the Ollama CLI, download the macOS app at ollama.ai/download. Once you've got it installed, you can download Llama 2 without having to register for an account or join any waiting lists. Run this in your terminal:
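Ollama pulls model weights by name; llama2 is the tag for the default Llama 2 chat model in its library:

```shell
# Download the Llama 2 weights from Ollama's model registry
ollama pull llama2
```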
Then you can run the model and chat with it:
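Ollama's run command drops you into an interactive prompt:

```shell
# Start an interactive chat session with the downloaded model
ollama run llama2
```

You can also pass a prompt directly on the command line, e.g. ollama run llama2 "Why are llamas cool?".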
Note: Ollama recommends that you have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.
MLC LLM is an open-source project that makes it possible to run language models locally on a variety of devices and platforms, including iOS and Android.
For iPhone users, there’s an MLC chat app on the App Store. MLC now has support for the 7B, 13B, and 70B versions of Llama 2, but it’s still in beta and not yet available in the App Store version, so you’ll need to install TestFlight to try it out. Check out the instructions for installing the beta version here.
Happy hacking! 🦙