Readme
TheBloke’s quantized version of DeepSeek’s Coder 33B Instruct model in GGUF format. The full model card can be found here.
Specifically, this is the deepseek-coder-33b-instruct.Q5_K_M.gguf model, with a 16k context window.
A quantized 33B-parameter language model from DeepSeek for state-of-the-art repository-level code completion.
This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 24 seconds, though predict time varies significantly with the input.
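Because this is the Instruct variant, prompts should be wrapped in DeepSeek Coder’s instruction template before being sent to the model. A minimal sketch of that formatting is below; the exact system line is paraphrased here and the `### Instruction:` / `### Response:` markers are an assumption based on DeepSeek’s published template, so verify against the full model card.

```python
# Sketch of the DeepSeek Coder Instruct prompt format.
# The system line below is a paraphrase (an assumption) -- check the
# model card for the exact wording used during fine-tuning.

SYSTEM_PROMPT = (
    "You are an AI programming assistant, utilizing the DeepSeek Coder "
    "model, and you only answer questions related to computer science."
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the assumed Instruct template."""
    return (
        f"{SYSTEM_PROMPT}\n"
        f"### Instruction:\n{instruction}\n"
        f"### Response:\n"
    )

print(build_prompt("Write a Python function that reverses a string."))
```

The resulting string is what you would pass as the prompt to a GGUF runtime such as llama.cpp, keeping total tokens within the 16k context window.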