
kcaverly / neuralbeagle14-7b-gguf
NeuralBeagle14-7B is (probably) the best 7B model you can find!

kcaverly / nous-capybara-34b-gguf
A SOTA Nous Research fine-tune of the 200K-context Yi-34B, trained on the Capybara dataset.

kcaverly / nous-hermes-2-solar-10.7b-gguf
Nous Hermes 2 - SOLAR 10.7B is Nous Research's flagship model built on the SOLAR 10.7B base.

kcaverly / nous-hermes-2-yi-34b-gguf
Nous Hermes 2 - Yi-34B is a state-of-the-art Yi fine-tune, trained on GPT-4-generated synthetic data.

kcaverly / openchat-3.5-1210-gguf
The "Overall Best Performing Open Source 7B Model" for coding, generalization, and mathematical reasoning.

kcaverly / phind-codellama-34b-v2-gguf
A quantized 34B parameter language model from Phind for code completion.

kcaverly / nexus-raven-v2-13b-gguf
A quantized 13B parameter language model from NexusFlow for SOTA zero-shot function calling.

kcaverly / deepseek-coder-33b-instruct-gguf
A quantized 33B parameter language model from DeepSeek for SOTA repository-level code completion.

kcaverly / deepseek-coder-6.7b-instruct
A ~7B parameter language model from DeepSeek for SOTA repository-level code completion.