tomasmcm / sam-7b

Source: SuperAGI/SAM ✦ Quant: TheBloke/SAM-AWQ ✦ SAM (Small Agentic Model), a 7B model that demonstrates impressive reasoning abilities despite its smaller size


Input

prompt (string, required)

Text prompt to send to the model.

max_new_tokens (integer)

Maximum number of tokens to generate per output sequence.

Default: 128

presence_penalty (number, minimum: -5, maximum: 5)

Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.

Default: 0

frequency_penalty (number, minimum: -5, maximum: 5)

Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.

Default: 0
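For intuition, the two penalties are typically applied to the logits along these lines (an illustrative sketch following vLLM-style sampling, whose parameter descriptions these match; not necessarily the exact code behind this endpoint):

import torch

def apply_penalties(logits, generated_ids, presence_penalty, frequency_penalty):
    # Count how many times each vocabulary token appears in the output so far.
    counts = torch.bincount(generated_ids, minlength=logits.shape[-1]).to(logits.dtype)
    # frequency_penalty scales with the count; presence_penalty is a flat
    # offset applied to any token that has appeared at least once.
    logits = logits - frequency_penalty * counts
    logits = logits - presence_penalty * (counts > 0).to(logits.dtype)
    return logits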

temperature (number, minimum: 0.01, maximum: 5)

Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.

Default: 0.8

top_p (number, minimum: 0.01, maximum: 1)

Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.

Default: 0.95

top_k (integer)

Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.

Default: -1
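For intuition, top-k and top-p filtering are commonly implemented roughly like this (an illustrative sketch, not the exact code behind this endpoint):

import torch

def filter_logits(logits, top_k=-1, top_p=1.0):
    # Work on a single 1-D logits vector; sort tokens by probability.
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    drop = torch.zeros_like(probs, dtype=torch.bool)
    if top_k > 0:
        # Keep only the k most likely tokens.
        drop[sorted_idx[top_k:]] = True
    if top_p < 1.0:
        # Drop a token once the probability mass before it already exceeds
        # top_p; the most likely token is always kept.
        cumulative = torch.cumsum(sorted_probs, dim=-1)
        drop[sorted_idx[cumulative - sorted_probs > top_p]] = True
    return logits.masked_fill(drop, float("-inf"))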

stop (string)

List of strings that stop the generation when they are generated. The returned output will not contain the stop strings.
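Putting these inputs together, a call from the replicate Python client might look like this (a minimal sketch; the input names follow the fields listed above, and you may want to pin a specific model version):

import replicate

output = replicate.run(
    "tomasmcm/sam-7b",
    input={
        "prompt": "Can elephants fly?",
        "max_new_tokens": 128,
        "temperature": 0.3,  # the readme below suggests 0.3 for best results
        "top_p": 0.95,
    },
)
print(output)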

Output

Final Answer: No, elephants cannot fly. Explanation: Elephants are large land animals that are unable to take flight due to their physical structure and lack of wings. They rely on their strong legs and trunks to move around on the ground, and their bodies are not designed for aerial flight. Option A: Yes, elephants can fly. This option is incorrect because elephants do not have the necessary anatomical features to take flight. They do not have wings or any other adaptations that would allow them to fly. Option B: Eleph

Run time and cost

This model costs approximately $0.0095 to run on Replicate, or 105 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 10 seconds. The predict time for this model varies significantly based on the inputs.

Readme

Model Card

SAM (Small Agentic Model), a 7B model that demonstrates impressive reasoning abilities despite its smaller size. SAM-7B has outperformed existing SoTA models on various reasoning benchmarks, including GSM8k and ARC-C.

For full details of this model, please read our release blog post.

Key Contributions

  • SAM-7B outperforms GPT-3.5, Orca, and several other 70B models on multiple reasoning benchmarks, including ARC-C and GSM8k.
  • Interestingly, despite being trained on a 97% smaller dataset, SAM-7B surpasses Orca-13B on GSM8k.
  • All responses in our fine-tuning dataset are generated by open-source models without any assistance from state-of-the-art models like GPT-3.5 or GPT-4.

Training

  • Trained by: SuperAGI Team
  • Hardware: 6 x NVIDIA H100 SXM (80 GB)
  • Base model: Mistral 7B
  • Duration of fine-tuning: 4 hours
  • Number of epochs: 1
  • Batch size: 16
  • Learning Rate: 2e-5
  • Warmup Ratio: 0.1
  • Optimizer: AdamW
  • Scheduler: Cosine
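For reference, the listed hyperparameters map onto a Hugging Face TrainingArguments configuration roughly as follows (an illustrative reconstruction, not the team's actual training script; output_dir and bf16 are assumptions):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="sam-7b-finetune",      # assumed name, not from the card
    num_train_epochs=1,
    per_device_train_batch_size=16,    # listed batch size; per-device split not stated
    learning_rate=2e-5,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    optim="adamw_torch",
    bf16=True,                         # plausible on H100s, but not stated in the card
)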

Example Prompt

The template used to build a prompt for the Instruct model is defined as follows:

<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]

Note that <s> and </s> are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings.
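A small helper that assembles this template from a list of (instruction, answer) turns could look like the following (an illustrative sketch; build_prompt is a hypothetical helper, not part of the model's API):

def build_prompt(turns, next_instruction):
    # turns: list of (instruction, model_answer) pairs from earlier in the dialogue.
    prompt = "<s>"
    for instruction, answer in turns:
        prompt += f" [INST] {instruction} [/INST] {answer}</s>"
    prompt += f" [INST] {next_instruction} [/INST]"
    return prompt

Because <s> and </s> are special tokens, tokenize such a string with add_special_tokens=False so the BOS token is not added a second time.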

Evaluation

These benchmarks show that our model has improved reasoning compared to Orca 2-7B, Orca 2-13B, and GPT-3.5. Despite being smaller in size, it shows better multi-hop reasoning, as illustrated below:

[Figure: Reasoning Benchmark Performance]

Note: Temperature=0.3 is suggested for optimal performance.

Run the model

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SuperAGI/SAM"

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Can elephants fly?"
inputs = tokenizer(text, return_tensors="pt")

# Greedy decoding by default; see the temperature note above for sampling.
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
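To follow the temperature suggestion from the Evaluation section, enable sampling explicitly (a minimal variation on the snippet above):

# Sampled decoding with the suggested temperature of 0.3.
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.3,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))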

Limitations

SAM is a demonstration that better reasoning can be induced with less, but higher-quality, data generated using open-source LLMs. The model is not suitable for conversations and simple Q&A; it performs better at task breakdown and reasoning. It does not have any moderation mechanisms, so it is not suitable for production use: it has no guardrails for toxicity, societal bias, or language limitations. We would love to collaborate with the community to build safer and better models.

The SuperAGI AI Team

Anmol Gautam, Arkajit Datta, Rajat Chawla, Ayush Vatsal, Sukrit Chatterjee, Adarsh Jha, Abhijeet Sinha, Rakesh Krishna, Adarsh Deep, Ishaan Bhola, Mukunda NS, Nishant Gaurav.