shaltielshmid / dictalm2.0

Run time and cost

This model runs on Nvidia A40 GPU hardware.
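
Since the model is hosted on Replicate, a minimal sketch of invoking it with the Replicate Python client is shown below. The input field name (`prompt`), the generation settings, and the lack of a pinned version hash are assumptions, not taken from this page; check the model's actual input schema before relying on it.

```python
# Minimal sketch of invoking this model with the Replicate Python client.
# Assumptions: REPLICATE_API_TOKEN is set in the environment, and the model
# accepts a "prompt" input field. You may also need to pin an explicit
# version, e.g. "shaltielshmid/dictalm2.0:<version-hash>".
import replicate

output = replicate.run(
    "shaltielshmid/dictalm2.0",
    input={"prompt": "שלום, קוראים לי"},  # Hebrew: "Hello, my name is"
)

# Language models on Replicate typically stream back an iterator of text chunks.
print(output if isinstance(output, str) else "".join(output))
```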

Readme

The DictaLM-2.0 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters trained to specialize in Hebrew text.

For full details of this model, please read our release blog post.

This is the full-precision base model. You can view and access the full collection of base/instruct unquantized/quantized versions of DictaLM-2.0 here.
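
For local use, a minimal sketch of loading the full-precision base model with Hugging Face transformers follows. The Hub repository ID `dicta-il/dictalm2.0` is an assumption based on the model name and is not stated on this page; substitute the correct ID if it differs.

```python
# Sketch: loading the full-precision base model with transformers.
# Assumption: the weights are published on the Hugging Face Hub as
# "dicta-il/dictalm2.0"; replace with the correct repository ID if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dicta-il/dictalm2.0"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 7B full-precision weights manageable
    device_map="auto",
)

prompt = "שלום, קוראים לי"  # Hebrew: "Hello, my name is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because this is the base (non-instruct) model, completion-style prompting as above is the intended usage rather than chat-style interaction.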

Model Architecture

DictaLM-2.0 is based on the Mistral-7B-v0.1 model with the following changes:

- An extended tokenizer with 1,000 injected tokens specifically for Hebrew, improving the compression rate from 5.78 tokens/word to 2.76 tokens/word (see the comparison sketch after this list).
- Continued pretraining on over 190B tokens of naturally occurring text, 50% Hebrew and 50% English.
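
To make the compression figure concrete, the sketch below counts tokens per word on a sample Hebrew sentence with the extended DictaLM-2.0 tokenizer and the original Mistral-7B-v0.1 tokenizer. Both Hub repository IDs are assumptions, and the exact ratios will vary with the text used.

```python
# Sketch: comparing tokens/word for Hebrew text between the extended
# DictaLM-2.0 tokenizer and the base Mistral-7B-v0.1 tokenizer.
# Assumptions: both tokenizers are available on the Hugging Face Hub under
# the IDs below; the exact ratio depends on the sample text.
from transformers import AutoTokenizer

# "A large language model trained specifically on Hebrew text"
sample = "מודל שפה גדול שאומן במיוחד על טקסט בעברית"

def tokens_per_word(tokenizer_id: str, text: str) -> float:
    tok = AutoTokenizer.from_pretrained(tokenizer_id)
    n_tokens = len(tok.encode(text, add_special_tokens=False))
    return n_tokens / len(text.split())

print("DictaLM-2.0:    ", tokens_per_word("dicta-il/dictalm2.0", sample))
print("Mistral-7B-v0.1:", tokens_per_word("mistralai/Mistral-7B-v0.1", sample))
```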

Notice

DictaLM 2.0 is a pretrained base model and therefore does not have any moderation mechanisms.

Citation

If you use this model, please cite:

[Will be added soon]