nateraw / llama-2-70b-chat-awq

llama-2-70b-chat quantized with AWQ and served with vLLM

  • Public
  • 76 runs


Run time and cost

This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 4 minutes, although prediction time varies significantly with the inputs.
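A minimal sketch of calling this model through Replicate's Python client (`pip install replicate`). The input field names used here (`prompt`, `max_new_tokens`) are assumptions; the model's actual input schema is defined on its Replicate page:

```python
def build_input(prompt: str, max_new_tokens: int = 256) -> dict:
    # Field names are assumptions -- check the model's Input schema on Replicate.
    return {"prompt": prompt, "max_new_tokens": max_new_tokens}

# Requires `pip install replicate` and REPLICATE_API_TOKEN set in the environment:
# import replicate
# output = replicate.run(
#     "nateraw/llama-2-70b-chat-awq",
#     input=build_input("Explain AWQ quantization in one sentence."),
# )
# print("".join(output))  # language models on Replicate stream back text chunks
```

Because predictions can take minutes on this model, a client with streaming (or polling the prediction status) is more practical than a single blocking call.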