peter65374 / openbuddy-llemma-34b-gguf

This is a Cog implementation of the "openbuddy-llemma-34b" model with 4-bit GGUF quantization.

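The model can be invoked through the Replicate Python client. Below is a minimal sketch; the input field names ("prompt", "max_new_tokens", "temperature") are assumptions for illustration, so check the model's API schema on Replicate for the actual parameters, and make sure REPLICATE_API_TOKEN is set in your environment.

```python
# Minimal sketch of calling this model via the Replicate Python client.
# The input field names below are assumptions -- consult the model's
# API tab on Replicate for the real schema.
import replicate

output = replicate.run(
    "peter65374/openbuddy-llemma-34b-gguf",  # append ":<version-hash>" to pin a version
    input={
        "prompt": "Prove that the sum of two even integers is even.",
        "max_new_tokens": 512,   # assumed parameter name
        "temperature": 0.7,      # assumed parameter name
    },
)

# Many Replicate language models stream output as a list of tokens;
# join them if needed, otherwise print the string directly.
print("".join(output) if not isinstance(output, str) else output)
```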

Run time and cost

This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 6 seconds, though prediction time varies significantly with the input.