lucataco / numinamath-7b-tir

NuminaMath is a series of language models that are trained to solve math problems using tool-integrated reasoning (TIR)

  • Public
  • 36 runs
  • L40S
  • Paper
  • License

Input

  • Prompt (string). Default: ""
  • System prompt (string). System prompt to send to the model. This is prepended to the prompt and helps guide system behavior. Ignored for non-chat models. Default: "You are a helpful assistant."
  • Minimum tokens (integer). The minimum number of tokens the model should generate as output. Default: 0
  • Maximum tokens (integer). The maximum number of tokens the model should generate as output. Default: 512
  • Temperature (number). The value used to modulate the next-token probabilities. Default: 0.6
  • Top-p (number). A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751). Default: 0.9
  • Top-k (integer). The number of highest-probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering). Default: 50
  • Presence penalty (number). Default: 0
  • Frequency penalty (number). Default: 0
  • Stop sequences (string). A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
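
The three sampling controls compose in a fixed order: logits are divided by the temperature, the candidate set is cut to the k most likely tokens, and then further cut to the smallest prefix whose cumulative probability reaches top_p. A minimal NumPy sketch of that filtering (an illustration of the standard technique, not the server's implementation):

```python
import numpy as np

def filter_logits(logits, temperature=0.6, top_k=50, top_p=0.9):
    """Return the sampling distribution after temperature, top-k, and top-p filtering."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()

    order = np.argsort(probs)[::-1]            # token indices, most likely first
    keep = order[:top_k] if top_k > 0 else order

    # Nucleus filtering: smallest prefix whose cumulative probability >= top_p
    cumulative = np.cumsum(probs[keep])
    cutoff = int(np.searchsorted(cumulative, top_p)) + 1
    keep = keep[:cutoff]

    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()            # renormalize the surviving tokens

dist = filter_logits([2.0, 1.0, 0.5, -1.0], temperature=0.6, top_k=3, top_p=0.9)
print(dist)  # most mass on the first token; the lowest-probability tokens are zeroed out
```

Lower temperatures sharpen the distribution before the cutoffs are applied, which is why the default of 0.6 pairs naturally with a fairly permissive top_p of 0.9.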

Output

1. **Understand the Polynomial and its Roots**: The polynomial given is \(x^2 + kx + 36\). Let the roots of the polynomial be \(r_1\) and \(r_2\).
2. **Relationships Between Roots and Coefficients**: For a polynomial \(x^2 + bx + c = 0\), the sum of the roots \(r_1 + r_2 = -b\) and the product of the roots \(r_1 \cdot r_2 = c\). In our case, \(r_1 + r_2 = -k\) and \(r_1 \cdot r_2 = 36\).
3. **Conditions for Distinct Integer Roots**: We need to find pairs of distinct integers \((r_1, r_2)\) such that their product is 36. Then, for each pair, we will determine the corresponding value of \(k\).
4. **Check for Distinct Pairs**: Ensure \(r_1 \neq r_2\).
5. **Calculate the Values of \(k\)**: For each valid pair \((r_1, r_2)\), compute \(k = -(r_1 + r_2)\) and count the distinct values.

Let's implement this in Python to find the number of distinct values of \(k\).

```python
import sympy as sp

# Define the product of the roots
product = 36

# Find all pairs of integers (r1, r2) such that r1 * r2 = product
integer_pairs = []
for i in range(1, product + 1):
    if product % i == 0:
        j = product // i
        integer_pairs.append((i, j))
        integer_pairs.append((j, i))  # Include both (i, j) and (j, i)

# Filter pairs to ensure they are distinct and r1 != r2
distinct_pairs = [(r1, r2) for r1, r2 in integer_pairs if r1 != r2]

# Calculate the corresponding values of k
k_values = set(-(r1 + r2) for r1, r2 in distinct_pairs)

# Output the number of
```
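
Note that this sample generation is cut off at the token limit, and its loop only enumerates positive divisors, so it misses the negative factor pairs of 36. A complete enumeration (a sketch written for this page, not model output) counts both signs:

```python
# Enumerate every integer pair (r1, r2) with r1 * r2 = 36 and r1 != r2,
# including negative divisors, and collect k = -(r1 + r2).
product = 36
k_values = set()
for r1 in range(-product, product + 1):
    if r1 != 0 and product % r1 == 0:
        r2 = product // r1
        if r1 != r2:                    # (6, 6) and (-6, -6) give repeated roots
            k_values.add(-(r1 + r2))

print(sorted(k_values), len(k_values))  # [-37, -20, -15, -13, 13, 15, 20, 37] 8
```

In a full tool-integrated run, the model would see this execution result in an ```output block and conclude that there are 8 values of \(k\).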

Run time and cost

This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

Model Card for NuminaMath 7B TIR

NuminaMath is a series of language models that are trained to solve math problems using tool-integrated reasoning (TIR). NuminaMath 7B TIR won the first progress prize of the AI Math Olympiad (AIMO), with a score of 29/50 on the public and private test sets.


This model is a fine-tuned version of deepseek-ai/deepseek-math-7b-base with two stages of supervised fine-tuning:

  • Stage 1: fine-tune the base model on a large, diverse dataset of natural language math problems and solutions, where each solution is templated with Chain of Thought (CoT) to facilitate reasoning.
  • Stage 2: fine-tune the model from Stage 1 on a synthetic dataset of tool-integrated reasoning, where each math problem is decomposed into a sequence of rationales, Python programs, and their outputs. Here we followed Microsoft’s ToRA paper and prompted GPT-4 to produce solutions in the ToRA format with code execution feedback. Fine-tuning on this data produces a reasoning agent that can solve mathematical problems via a mix of natural language reasoning and use of the Python REPL to compute intermediate results.
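
As an illustration, a training example in the ToRA format interleaves a short rationale, a fenced Python block, and an output block carrying the execution result. The delimiters here match the ```output stop string used in the inference example below; the problem text and numbers are invented for illustration:

````
The product of the roots of \(x^2 + kx + 36\) is 36, so first enumerate the positive factor pairs:
```python
pairs = [(a, 36 // a) for a in range(1, 37) if 36 % a == 0]
print(len(pairs))
```
```output
9
```
There are 9 positive factor pairs; next we apply the distinct-roots condition...
````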

Model description

  • Model type: A 7B parameter math LLM fine-tuned in two stages of supervised fine-tuning, first on a dataset with math problem-solution pairs and then on a synthetic dataset with examples of multi-step generations using tool-integrated reasoning.
  • Language(s) (NLP): Primarily English
  • License: Apache 2.0
  • Finetuned from model: deepseek-ai/deepseek-math-7b-base

Intended uses & limitations

Here’s how you can run the model using the pipeline() function from 🤗 Transformers:

```python
import re
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="AI-MO/NuminaMath-7B-TIR", torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": "For how many values of the constant $k$ will the polynomial $x^{2}+kx+36$ have two distinct integer roots?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

gen_config = {
    "max_new_tokens": 1024,
    "do_sample": False,
    "stop_strings": ["```output"], # Generate until Python code block is complete
    "tokenizer": pipe.tokenizer,
}

outputs = pipe(prompt, **gen_config)
text = outputs[0]["generated_text"]
print(text)

# WARNING: This code will execute the Python code in the string. We show this for educational purposes only.
# Please refer to our full pipeline for a safer way to execute code.
python_code = re.findall(r"```python(.*?)```", text, re.DOTALL)[0]
exec(python_code)
```

The above executes a single step of Python code - for more complex problems, you will want to run the logic for several steps to obtain the final solution.

Bias, Risks, and Limitations

NuminaMath 7B TIR was created to solve problems in the narrow domain of competition-level mathematics. As a result, the model should not be used for general chat applications. With greedy decoding, we find the model is capable of solving problems at the level of AMC 12, but often struggles to generate a valid solution on harder problems at the AIME and Math Olympiad level. The model also struggles to solve geometry problems, likely due to its limited capacity and lack of other modalities like vision.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • total_train_batch_size: 32
  • total_eval_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 4.0
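
The effective batch sizes are consistent with the per-device values scaled across the 8 GPUs. A quick check (assuming a gradient-accumulation factor of 1, which the card does not state):

```python
# Per-device batch sizes scaled across the 8 devices listed above
train_batch_size, eval_batch_size, num_devices = 4, 8, 8

total_train = train_batch_size * num_devices  # matches total_train_batch_size
total_eval = eval_batch_size * num_devices    # matches total_eval_batch_size

print(total_train, total_eval)  # 32 64
```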

Framework versions

  • Transformers 4.40.1
  • Pytorch 2.3.1
  • Datasets 2.18.0
  • Tokenizers 0.19.1

Citation

If you find NuminaMath 7B TIR useful in your work, please cite it with:

@misc{numina_math_7b,
  author = {Edward Beeching and Shengyi Costa Huang and Albert Jiang and Jia Li and Benjamin Lipkin and Zihan Qina and Kashif Rasul and Ziju Shen and Roman Soletskyi and Lewis Tunstall},
  title = {NuminaMath 7B TIR},
  year = {2024},
  publisher = {Numina \& Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/AI-MO/NuminaMath-7B-TIR}}
}