Readme
See the official model card for this model here: https://huggingface.co/TheBloke/Llama-2-70B-chat-AWQ
Thank you to TheBloke for sharing this model!
This is llama-2-70b-chat, quantized with AWQ and served with vLLM.
The model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 4 minutes, though prediction time varies significantly with the inputs.
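Llama-2 chat variants expect prompts in Meta's [INST]/&lt;&lt;SYS&gt;&gt; chat template, so inputs sent to this model are usually wrapped accordingly. A minimal sketch of building such a prompt (the helper name is illustrative, not part of this model's API):

```python
def build_llama2_chat_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user message in the Llama-2 chat template.

    The system prompt sits inside <<SYS>> tags within the first [INST] block;
    the model's reply is generated after the closing [/INST].
    """
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"


prompt = build_llama2_chat_prompt(
    "You are a helpful assistant.",
    "What is AWQ quantization?",
)
```

With a serving stack like vLLM, a string in this shape would typically be passed as the completion prompt; exact parameter names depend on the client you use.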