kcaverly / deepseek-coder-33b-instruct-gguf

A quantized 33B-parameter language model from DeepSeek for state-of-the-art repository-level code completion

Run time and cost

This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 24 seconds, though prediction time varies significantly depending on the inputs.
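
As a rough sketch, a prediction can be started from Python with Replicate's client library. The input field names below (`prompt`, `max_new_tokens`) are assumptions and should be checked against this model's input schema on Replicate.

```python
# Minimal sketch using the Replicate Python client (pip install replicate).
# The input field names ("prompt", "max_new_tokens") are assumptions; verify
# them against the model's published input schema before relying on them.
import replicate

output = replicate.run(
    "kcaverly/deepseek-coder-33b-instruct-gguf",
    input={
        "prompt": "Write a Python function that merges two sorted lists.",
        "max_new_tokens": 512,  # assumed parameter name
    },
)

# Output is typically returned as an iterator of text chunks.
print("".join(output))
```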

Readme

TheBloke’s quantized version of DeepSeek’s Coder 33B Instruct model in GGUF format. The full model card can be found here.

Specifically, this is the deepseek-coder-33b-instruct.Q5_K_M.gguf model, with a 16k context window.
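
Because this is a standard GGUF file, the same weights can also be run locally with llama.cpp bindings. The sketch below assumes the file has been downloaded separately (e.g. from TheBloke’s Hugging Face repository) and uses llama-cpp-python with the 16k context window set explicitly; the instruct prompt template shown is an assumption based on the upstream DeepSeek Coder model card.

```python
# Sketch: running the same Q5_K_M GGUF locally with llama-cpp-python
# (pip install llama-cpp-python). Assumes deepseek-coder-33b-instruct.Q5_K_M.gguf
# has already been downloaded to the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-coder-33b-instruct.Q5_K_M.gguf",
    n_ctx=16384,       # the 16k context window mentioned above
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# Instruct-style prompt; the exact template is an assumption taken from the
# upstream DeepSeek Coder model card and may need adjusting.
prompt = (
    "You are an AI programming assistant.\n"
    "### Instruction:\nWrite a quicksort implementation in Python.\n"
    "### Response:\n"
)

result = llm(prompt, max_tokens=512, stop=["### Instruction:"])
print(result["choices"][0]["text"])
```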