ibm-granite/granite-4.0-h-small


Granite-4.0-H-Small

Model Summary: Granite-4.0-H-Small is a 32B parameter long-context instruct model finetuned from Granite-4.0-H-Small-Base using a combination of open-source instruction datasets with permissive licenses and internally collected synthetic datasets. This model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging. Granite 4.0 instruct models feature improved instruction-following (IF) and tool-calling capabilities, making them more effective in enterprise applications.

Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 4.0 models for languages beyond these 12.

Intended use: The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.

Capabilities:

* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code-related tasks
* Function-calling tasks
* Multilingual dialog use cases
* Fill-In-the-Middle (FIM) code completions

Generation: This is a simple example of how to use the Granite-4.0-H-Small model.

Install the following libraries:

pip install torch torchvision torchaudio
pip install accelerate
pip install transformers

Then, copy the snippet from the section that is relevant for your use case.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model_path = "ibm-granite/granite-4.0-h-small"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
    { "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens, 
                        max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output[0])

Expected output:

<|start_of_role|>user<|end_of_role|>Please list one IBM Research laboratory located in the United States. You should only output its name and location.<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>Almaden Research Center, San Jose, California<|end_of_text|>
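
If you only need the assistant's reply without the chat-format special tokens, a minimal variant is to slice off the prompt tokens and decode with skip_special_tokens. This uses standard transformers decoding options, not a Granite-specific API, and reuses the variables from the snippet above:

# Variant of the decoding step: run this in place of the batch_decode call,
# while `output` still holds token IDs rather than decoded text.
prompt_length = input_tokens["input_ids"].shape[1]
reply = tokenizer.decode(output[0][prompt_length:], skip_special_tokens=True)
print(reply)  # e.g. Almaden Research Center, San Jose, California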

Tool-calling: Granite-4.0-H-Small comes with enhanced tool-calling capabilities, enabling seamless integration with external functions and APIs. To define a list of tools, please follow OpenAI's function definition schema.

This is an example of how to use the Granite-4.0-H-Small model's tool-calling ability:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a specified city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "Name of the city"
                    }
                },
                "required": ["city"]
            }
        }
    }
]

# change input text as desired
chat = [
    { "role": "user", "content": "What's the weather like in Boston right now?" },
]
chat = tokenizer.apply_chat_template(chat,
                                     tokenize=False,
                                     tools=tools,
                                     add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens, 
                        max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output[0])

Expected output:

<|start_of_role|>system<|end_of_role|>You are a helpful assistant with access to the following tools. You may call one or more tools to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{"type": "function", "function": {"name": "get_current_weather", "description": "Get the current weather for a specified city.", "parameters": {"type": "object", "properties": {"city": {"type": "string", "description": "Name of the city"}}, "required": ["city"]}}}
</tools>

For each tool call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request.<|end_of_text|>
<|start_of_role|>user<|end_of_role|>What's the weather like in Boston right now?<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|><tool_call>
{"name": "get_current_weather", "arguments": {"city": "Boston"}}
</tool_call><|end_of_text|>
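
In an application, the caller is responsible for parsing the JSON between the <tool_call> tags, executing the matching function, and returning the result to the model. Below is a minimal sketch of that step; the get_current_weather stub and the TOOL_REGISTRY mapping are illustrative placeholders, not part of the model or the transformers API:

import json
import re

# Illustrative stub: a real application would query a weather service here.
def get_current_weather(city: str) -> str:
    return f"Sunny, 22°C in {city}"

# Hypothetical registry mapping tool names from the schema to callables.
TOOL_REGISTRY = {"get_current_weather": get_current_weather}

# Extract the JSON payload between the <tool_call> tags in the decoded output.
match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", output[0], re.DOTALL)
if match:
    call = json.loads(match.group(1))
    result = TOOL_REGISTRY[call["name"]](**call["arguments"])
    print(result)
    # The result would then be appended to the conversation as a tool message
    # (with the role name expected by the model's chat template) and the model
    # invoked again to produce its final answer.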

Evaluation Results:

| Benchmarks | Metric | Micro (Dense) | H Micro (Dense) | H Tiny (MoE) | H Small (MoE) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| **General Tasks** | | | | | |
| MMLU | 5-shot | 65.98 | 67.43 | 68.65 | 78.44 |
| MMLU-Pro | 5-shot, CoT | 44.5 | 43.48 | 44.94 | 55.47 |
| BBH | 3-shot, CoT | 72.48 | 69.36 | 66.34 | 81.62 |
| AGI EVAL | 0-shot, CoT | 64.29 | 59 | 62.15 | 70.63 |
| GPQA | 0-shot, CoT | 30.14 | 32.15 | 32.59 | 40.63 |
| **Alignment Tasks** | | | | | |
| AlpacaEval 2.0 | | 29.49 | 31.49 | 30.61 | 42.48 |
| IFEval | Instruct, Strict | 85.5 | 86.94 | 84.78 | 89.87 |
| IFEval | Prompt, Strict | 79.12 | 81.71 | 78.1 | 85.22 |
| IFEval | Average | 82.31 | 84.32 | 81.44 | 87.55 |
| ArenaHard | | 25.84 | 36.15 | 35.75 | 46.48 |
| **Math Tasks** | | | | | |
| GSM8K | 8-shot | 85.45 | 81.35 | 84.69 | 87.27 |
| GSM8K Symbolic | 8-shot | 79.82 | 77.5 | 81.1 | 87.38 |
| Minerva Math | 0-shot, CoT | 62.06 | 66.44 | 69.64 | 74 |
| DeepMind Math | 0-shot, CoT | 44.56 | 43.83 | 49.92 | 59.33 |
| **Code Tasks** | | | | | |
| HumanEval | pass@1 | 80 | 81 | 83 | 88 |
| HumanEval+ | pass@1 | 72 | 75 | 76 | 83 |
| MBPP | pass@1 | 72 | 73 | 80 | 84 |
| MBPP+ | pass@1 | 64 | 64 | 69 | 71 |
| CRUXEval-O | pass@1 | 41.5 | 41.25 | 39.63 | 50.25 |
| BigCodeBench | pass@1 | 39.21 | 37.9 | 41.06 | 46.23 |
| **Tool Calling Tasks** | | | | | |
| BFCL v3 | | 59.98 | 57.56 | 57.65 | 64.69 |
| **Multilingual Tasks** | | | | | |
| MULTIPLE | pass@1 | 49.21 | 49.46 | 55.83 | 57.37 |
| MMMLU | 5-shot | 55.14 | 55.19 | 61.87 | 69.69 |
| INCLUDE | 5-shot | 51.62 | 50.51 | 53.12 | 63.97 |
| MGSM | 8-shot | 28.56 | 44.48 | 45.36 | 38.72 |
| **Safety** | | | | | |
| SALAD-Bench | | 97.06 | 96.28 | 97.77 | 97.3 |
| AttaQ | | 86.05 | 84.44 | 86.61 | 86.64 |

Languages covered by the multilingual benchmarks:

| Benchmarks | # Langs | Languages |
| :--- | :--- | :--- |
| MMMLU | 11 | ar, de, en, es, fr, ja, ko, pt, zh, bn, hi |
| INCLUDE | 14 | hi, bn, ta, te, ar, de, es, fr, it, ja, ko, nl, pt, zh |
| MGSM | 5 | en, es, fr, ja, zh |

Model Architecture: Granite-4.0-H-Small is built on a decoder-only MoE transformer architecture. Core components of this architecture are: GQA, Mamba2, MoE with shared experts, SwiGLU activation, RMSNorm, and shared input/output embeddings.

| Model | Micro (Dense) | H Micro (Dense) | H Tiny (MoE) | H Small (MoE) |
| :--- | :--- | :--- | :--- | :--- |
| Embedding size | 2560 | 2048 | 1536 | 4096 |
| Number of layers | 40 attention | 4 attention / 36 Mamba2 | 4 attention / 36 Mamba2 | 4 attention / 36 Mamba2 |
| Attention head size | 64 | 64 | 128 | 128 |
| Number of attention heads | 40 | 32 | 12 | 32 |
| Number of KV heads | 8 | 8 | 4 | 8 |
| Mamba2 state size | - | 128 | 128 | 128 |
| Number of Mamba2 heads | - | 64 | 48 | 128 |
| MLP / shared expert hidden size | 8192 | 8192 | 1024 | 1536 |
| Number of experts | - | - | 64 | 72 |
| Number of active experts | - | - | 6 | 10 |
| Expert hidden size | - | - | 512 | 768 |
| MLP activation | SwiGLU | SwiGLU | SwiGLU | SwiGLU |
| Sequence length | 128K | 128K | 128K | 128K |
| Position embedding | RoPE | NoPE | NoPE | NoPE |
| # Parameters | 3B | 3B | 7B | 32B |
| # Active parameters | 3B | 3B | 1B | 9B |
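
As a quick sanity check of the parameter counts above, the loaded checkpoint can be inspected with standard PyTorch calls. This is a minimal sketch reusing the `model` loaded in the Generation section:

# Count every parameter in the loaded checkpoint. For granite-4.0-h-small this
# should land near 32B total; the 9B "active" figure refers to the parameters
# used per token in the MoE layers, which a plain count does not reflect.
total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params / 1e9:.1f}B")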

Training Data: Overall, our SFT data is largely composed of three key sources: (1) publicly available datasets with permissive licenses, (2) internal synthetic data targeting specific capabilities, and (3) a select set of human-curated data.

Infrastructure: We trained the Granite 4.0 language models on an NVIDIA GB200 NVL72 cluster hosted by CoreWeave. Intra-rack communication occurs over the 72-GPU NVLink domain, while a non-blocking, full Fat-Tree NDR 400 Gb/s InfiniBand network handles inter-rack communication. This cluster provides scalable and efficient infrastructure for training our models over thousands of GPUs.

Ethical Considerations and Limitations: Granite 4.0 instruct models are primarily finetuned on instruction-response pairs, mostly in English, but also on multilingual data covering multiple languages. Although the model can handle multilingual dialog use cases, its performance might not match that on English tasks. In such cases, introducing a small number of examples (few-shot prompting) can help the model generate more accurate outputs. While this model has been aligned with safety in mind, it may in some cases produce inaccurate, biased, or unsafe responses to user prompts. We therefore urge the community to use this model with proper safety testing and tuning tailored to their specific tasks.

Resources:
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources