meta / codellama-70b-instruct

A 70-billion-parameter Llama model tuned for coding and conversation

Run time and cost

This model runs on an Nvidia A100 (80GB) GPU. Predictions typically complete within 21 seconds.

Readme

CodeLlama is a family of Llama 2 models fine-tuned for coding. This is CodeLlama-70b-Instruct, a 70-billion-parameter Llama model tuned for chatting about code.
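
Below is a minimal sketch of calling this model through the Replicate Python client. The input field names used here (such as "prompt" and "max_new_tokens") are assumptions for illustration, not confirmed from this page; check the model's input schema before running.

```python
# Minimal sketch: calling meta/codellama-70b-instruct via the Replicate Python client.
# Requires `pip install replicate` and a REPLICATE_API_TOKEN environment variable.
import replicate

output = replicate.run(
    "meta/codellama-70b-instruct",
    input={
        "prompt": "Write a Python function that checks whether a string is a palindrome.",
        "max_new_tokens": 256,  # assumed parameter name; the actual schema may differ
    },
)

# Text models on Replicate typically return output as a stream of text chunks,
# so join them before printing.
print("".join(output))
```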