ibm-granite/granite-4.1-8b

Granite-4.1-8B

Model Summary: Granite-4.1-8B is an 8B-parameter long-context instruct model finetuned from Granite-4.1-8B-Base using a combination of open source instruction datasets with permissive licenses and internally collected synthetic datasets. Granite 4.1 models have gone through an improved post-training pipeline, including supervised finetuning and reinforcement learning alignment, resulting in enhanced tool-calling, instruction-following, and chat capabilities.

Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 4.1 models for languages beyond these twelve.

Intended use: The model is designed to follow general instructions and can serve as the foundation for AI assistants across diverse domains, including business applications, as well as for LLM agents equipped with tool-use capabilities.

Capabilities:

* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code related tasks
* Function-calling tasks
* Multilingual dialog use cases
* Fill-In-the-Middle (FIM) code completions

Generation: This is a simple example of how to use the Granite-4.1-8B model.

Install the following libraries:

pip install torch torchvision torchaudio
pip install accelerate
pip install transformers

Then, copy the snippet from the section that is relevant for your use case.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model_path = "ibm-granite/granite-4.1-8b"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
    { "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens, 
                        max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output[0])

Expected output:

<|start_of_role|>user<|end_of_role|>Please list one IBM Research laboratory located in the United States. You should only output its name and location.<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>IBM Almaden Research Laboratory, San Jose, California, United States.<|end_of_text|>
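
The snippet above decodes greedily. The variant below, continuing from the same `model`, `tokenizer`, and `input_tokens`, enables sampling instead; `do_sample`, `temperature`, and `top_p` are standard `generate()` arguments in transformers, and the values shown are illustrative rather than tuned recommendations.

```python
# Sampled generation, reusing model, tokenizer, and input_tokens from above.
# The sampling values are illustrative, not tuned recommendations.
output = model.generate(
    **input_tokens,
    max_new_tokens=100,
    do_sample=True,   # sample from the distribution instead of greedy decoding
    temperature=0.7,  # sharpen (<1.0) or flatten (>1.0) the token distribution
    top_p=0.9,        # nucleus sampling: keep the smallest token set with mass 0.9
)
print(tokenizer.batch_decode(output)[0])
```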

Tool-calling: Granite-4.1-8B comes with enhanced tool-calling capabilities, enabling seamless integration with external functions and APIs. To define a list of tools, please follow OpenAI’s function definition schema.

This is an example of how to use the Granite-4.1-8B model’s tool-calling ability:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model_path = "ibm-granite/granite-4.1-8b"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a specified city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "Name of the city"
                    }
                },
                "required": ["city"]
            }
        }
    }
]

# change input text as desired
chat = [
    { "role": "user", "content": "What's the weather like in Boston right now?" },
]
chat = tokenizer.apply_chat_template(chat,
                                     tokenize=False,
                                     tools=tools,
                                     add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens, 
                        max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output[0])

Expected output:

<|start_of_role|>system<|end_of_role|>You are a helpful assistant with access to the following tools. You may call one or more tools to assist with the user query.
You are provided with function signatures within <tools></tools> XML tags:
<tools>
{"type": "function", "function": {"name": "get_current_weather", "description": "Get the current weather for a specified city.", "parameters": {"type": "object", "properties": {"city": {"type": "string", "description": "Name of the city"}}, "required": ["city"]}}}
</tools>
For each tool call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request.<|end_of_text|>
<|start_of_role|>user<|end_of_role|>What's the weather like in Boston right now?<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|><tool_call>
{"name": "get_current_weather", "arguments": {"city": "Boston"}}
</tool_call><|end_of_text|>
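
The model only emits the call; executing the function and returning its result is up to the application. Below is a minimal sketch of that round trip, continuing from the snippet above. The `get_current_weather` implementation is a stub invented for illustration, and the `"tool"`-role message for returning the result follows the generic transformers chat-template convention, which is an assumption here; consult the model’s chat template if it expects a different format.

```python
import json
import re

# Extract the tool call the model emitted between <tool_call> tags.
match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", output[0], re.DOTALL)
call = json.loads(match.group(1))  # {"name": ..., "arguments": {...}}

# Stub implementation -- a real application would query a weather API here.
def get_current_weather(city: str) -> dict:
    return {"city": city, "temperature_c": 21, "condition": "sunny"}

result = get_current_weather(**call["arguments"])

# Append the assistant's tool call and the tool result, then generate the
# final answer. The "tool" role follows the generic transformers convention
# and is an assumption about this model's chat template.
chat = [
    {"role": "user", "content": "What's the weather like in Boston right now?"},
    {"role": "assistant", "tool_calls": [{"type": "function", "function": call}]},
    {"role": "tool", "content": json.dumps(result)},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, tools=tools,
                                       add_generation_prompt=True)
input_tokens = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(**input_tokens, max_new_tokens=100)
print(tokenizer.batch_decode(output)[0])
```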

Evaluation Results:

| Benchmarks | Metric | 3B Dense | 8B Dense | 30B Dense |
| :--- | :--- | ---: | ---: | ---: |
| **General Tasks** | | | | |
| MMLU | 5-shot | 67.02 | 73.84 | 80.16 |
| MMLU-Pro | 5-shot, CoT | 49.83 | 55.99 | 64.09 |
| BBH | 3-shot, CoT | 75.83 | 80.51 | 83.74 |
| AGI EVAL | 0-shot, CoT | 65.16 | 72.43 | 77.80 |
| GPQA | 0-shot, CoT | 31.70 | 41.96 | 45.76 |
| SimpleQA | | 3.68 | 4.82 | 6.81 |
| **Alignment Tasks** | | | | |
| AlpacaEval 2.0 | | 38.57 | 50.08 | 56.16 |
| IFEval | Avg | 82.30 | 87.06 | 89.65 |
| ArenaHard | | 37.80 | 68.98 | 71.02 |
| MTBench | Avg | 7.57 | 8.61 | 8.61 |
| **Math Tasks** | | | | |
| GSM8K | 8-shot | 86.88 | 92.49 | 94.16 |
| GSM Symbolic | 8-shot | 81.32 | 83.70 | 75.70 |
| Minerva Math | 0-shot, CoT | 67.94 | 80.10 | 81.32 |
| DeepMind Math | 0-shot, CoT | 64.64 | 80.07 | 81.93 |
| **Code Tasks** | | | | |
| HumanEval | pass@1 | 81.71 | 85.37 | 88.41 |
| HumanEval+ | pass@1 | 76.83 | 79.88 | 85.37 |
| MBPP | pass@1 | 71.16 | 87.30 | 85.45 |
| MBPP+ | pass@1 | 62.17 | 73.81 | 73.54 |
| CRUXEval-O | pass@1 | 40.75 | 47.63 | 55.75 |
| BigCodeBench | pass@1 | 32.19 | 35.00 | 38.77 |
| MULTIPLE | pass@1 | 52.54 | 60.26 | 62.31 |
| Eval+ Avg | pass@1 | 67.05 | 80.21 | 82.66 |
| **Tool Calling Tasks** | | | | |
| BFCL v3 | | 60.80 | 68.27 | 73.68 |
| **Multilingual Tasks** | | | | |
| MMMLU | 5-shot | 57.61 | 64.84 | 73.71 |
| INCLUDE | 5-shot | 52.05 | 58.89 | 67.26 |
| MGSM | 8-shot | 70.00 | 82.32 | 71.12 |
| **Safety** | | | | |
| SALAD-Bench | | 93.95 | 95.80 | 96.41 |
| AttaQ | | 81.88 | 81.19 | 85.76 |
| Tulu3 Safety Eval | Avg | 66.84 | 75.57 | 78.19 |
Language coverage of the multilingual benchmarks:

| Benchmarks | # Langs | Languages |
| :--- | ---: | :--- |
| MMMLU | 11 | ar, de, en, es, fr, ja, ko, pt, zh, bn, hi |
| INCLUDE | 14 | hi, bn, ta, te, ar, de, es, fr, it, ja, ko, nl, pt, zh |
| MGSM | 5 | en, es, fr, ja, zh |

Model Architecture:

Granite-4.1-8B is built on a decoder-only dense transformer architecture. Core components of this architecture are grouped-query attention (GQA), rotary position embeddings (RoPE), an MLP with SwiGLU activation, RMSNorm, and shared input/output embeddings.

| Model | 3B Dense | 8B Dense | 30B Dense |
| :--- | ---: | ---: | ---: |
| Embedding size | 2560 | 4096 | 4096 |
| Number of layers | 40 | 40 | 64 |
| Attention head size | 64 | 128 | 128 |
| Number of attention heads | 40 | 32 | 32 |
| Number of KV heads | 8 | 8 | 8 |
| MLP / Shared expert hidden size | 8192 | 12800 | 32768 |
| MLP activation | SwiGLU | SwiGLU | SwiGLU |
| Sequence length | 131072 | 131072 | 131072 |
| Position embedding | RoPE | RoPE | RoPE |
| # Parameters | 3B | 8B | 30B |
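
As a sanity check, the table's dimensions roughly reproduce the stated parameter count. The sketch below assumes a standard GQA transformer block (Q/K/V/O projections without biases, plus a three-matrix SwiGLU MLP) and a vocabulary of about 100K tokens; the vocabulary size is an assumption, as it is not listed in the table.

```python
# Back-of-the-envelope parameter count for the 8B Dense column, using the
# dimensions from the table above. Assumes standard GQA attention (Q, K, V, O
# projections, no biases) and a SwiGLU MLP with gate, up, and down matrices.
hidden = 4096          # embedding size
layers = 40            # number of layers
head_size = 128        # attention head size
n_heads = 32           # query heads
n_kv_heads = 8         # KV heads (GQA)
mlp_hidden = 12800     # MLP hidden size
vocab = 100_000        # assumed vocabulary size -- not listed in the table

attn = hidden * (n_heads * head_size)          # Q projection
attn += 2 * hidden * (n_kv_heads * head_size)  # K and V projections
attn += (n_heads * head_size) * hidden         # output projection
mlp = 3 * hidden * mlp_hidden                  # SwiGLU gate, up, down
embed = vocab * hidden                         # counted once: embeddings are shared

total = layers * (attn + mlp) + embed
print(f"~{total / 1e9:.1f}B parameters")       # prints roughly 8.4B
```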

Training Data: Overall, our SFT data is largely composed of three key sources: (1) publicly available datasets with permissive licenses, (2) internal synthetic data targeting specific capabilities, and (3) a select set of human-curated data.

Supervised Fine-Tuning and Reinforcement Learning: The instruct model was finetuned with significantly improved supervised finetuning (SFT) and reinforcement learning (RL) pipelines, using a high-quality mix of the datasets mentioned above. Through rigorous SFT-RL cycles, we improved the Granite-4.1 models’ tool-calling, instruction-following, and chat capabilities. For further details, please check our Granite-4.1 Blog.

Infrastructure: We trained the Granite 4.1 language models on an NVIDIA GB200 NVL72 cluster hosted by CoreWeave. Intra-rack communication occurs via the 72-GPU NVLink domain, and a non-blocking, full Fat-Tree NDR 400 Gb/s InfiniBand network provides inter-rack communication. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

Ethical Considerations and Limitations: Granite 4.1 instruct models are primarily finetuned on instruction-response pairs, mostly in English but also including multilingual data covering multiple languages. Although the model can handle multilingual dialog use cases, its performance on other languages might not match its performance on English tasks. In such cases, introducing a small number of examples (few-shot) can help the model generate more accurate outputs (see the sketch below). While this model has been aligned with safety in mind, it may in some cases produce inaccurate, biased, or unsafe responses to user prompts. We urge the community to use this model with proper safety testing and tuning tailored to their specific tasks. To enhance safety in enterprise deployments, we recommend using Granite 4.1 language models alongside Granite Guardian, a model designed to detect and flag risks in inputs and outputs across key dimensions outlined in the IBM AI Risk Atlas.
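
As a concrete illustration of the few-shot suggestion above, worked example turns can be prepended to the chat before the real query. The German sentiment-classification pairs below are invented for illustration, and `tokenizer` is the one loaded in the Generation section.

```python
# Few-shot prompting sketch: invented German sentiment-classification pairs
# precede the real query to steer the model on a non-English task.
chat = [
    {"role": "user", "content": "Klassifiziere die Stimmung: 'Das Essen war ausgezeichnet.'"},
    {"role": "assistant", "content": "positiv"},
    {"role": "user", "content": "Klassifiziere die Stimmung: 'Der Service war enttäuschend.'"},
    {"role": "assistant", "content": "negativ"},
    # the actual query comes after the examples
    {"role": "user", "content": "Klassifiziere die Stimmung: 'Die Lieferung kam pünktlich an.'"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```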

Resources:

- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources
