Readme
See the official model card for more information: https://huggingface.co/CausalLM/14B
The model specifically being served here is this one from TheBloke: https://huggingface.co/TheBloke/CausalLM-14B-AWQ
This is the CausalLM/14B model with AWQ quantization. According to the model card, it is perhaps better than all existing models under 70B parameters in most quantitative evaluations.
This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 4 seconds.