paragekbote/gemma3-torchao-quant-sparse

A quick setup of gemma-3-4b with torchao INT8 weight-only quantization and sparsity for efficient inference.
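A minimal sketch of what this setup looks like, not the repo's actual packaging code: loading gemma-3-4b and applying torchao INT8 weight-only quantization, with an optional sparsity step. The checkpoint name `google/gemma-3-4b-it` and the exact torchao entry points (`quantize_`, `int8_weight_only`, `sparsify_`, `semi_sparse_weight`) are assumptions based on recent torchao releases.

```python
# Sketch only: assumes transformers + a recent torchao are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torchao.quantization import quantize_, int8_weight_only

MODEL_ID = "google/gemma-3-4b-it"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="cuda"
)

# INT8 weight-only quantization: linear weights are stored in int8 and
# dequantized on the fly, roughly halving weight memory vs. bf16.
quantize_(model, int8_weight_only())

# Optional 2:4 semi-structured sparsity on linear weights; whether this repo
# applies it exactly this way is an assumption.
# from torchao.sparsity import sparsify_, semi_sparse_weight
# sparsify_(model, semi_sparse_weight())

prompt = "Explain INT8 weight-only quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```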

