Week 3 of LLaMA 🦙
Posted by @zeke
Just three weeks ago, Meta AI released a new open-source language model called LLaMA. It isn't even fully open-source: only the code has been open-sourced, and the weights haven't been widely released. (Legitimately, at least.)
Even so, a ridiculous amount of stuff has been built around it.
It feels a lot like the first few weeks of Stable Diffusion. Like Stable Diffusion, LLaMA is easy to run on your own hardware, large enough to be useful, and open-source enough to be tinkered with, as Simon Willison articulated earlier this week.
Here’s just a partial list of what’s happened this week:
- llama.cpp – A port of LLaMA to C/C++ by Georgi Gerganov.
- Large language models are having their Stable Diffusion moment – A blog post by Simon Willison summarizing some of the things that happened up to this week.
- Stanford’s Alpaca – A version of LLaMA fine-tuned to follow instructions.
- Stanford Alpaca, and the acceleration of on-device large language model development – A blog post by Simon Willison about Alpaca.
- Running LLaMA on a Raspberry Pi by Artem Andreenko.
- Running LLaMA on a Pixel 5 by Georgi Gerganov.
- Run LLaMA and Alpaca with a one-liner – `npx dalai llama`
- alpaca.cpp – llama.cpp but for Alpaca by Kevin Kwok.
- Run LLaMA with Cog and Replicate
- Load LLaMA models instantly by Justine Tunney.
- Do the LLaMA thing, but now in Rust by setzer22.
- Train and run Stanford Alpaca on your own machine from us.
- Alpaca-LoRA: Low-Rank LLaMA Instruct-Tuning by Eric J. Wang.
- Fine-tune LLaMA to speak like Homer Simpson from us.
- Llamero – A GUI application to easily try out Facebook’s LLaMA models by Marcel Pociot.
Open source language models are clearly having a moment. We’re looking forward to seeing what happens next week.