SmolLM3-3B, optimized with Pruna for lightning-fast, memory-efficient AI inference.
Want to make some of these yourself?