tomasmcm / evolved-seeker-1.3b

Source: TokenBender/evolvedSeeker_1_3 ✦ Quant: TheBloke/evolvedSeeker_1_3-AWQ ✦ A fine-tuned version of deepseek-ai/deepseek-coder-1.3b-base on 50k instructions for 3 epochs

  • Public
  • 29 runs

Input

prompt (string, required)
Text prompt to send to the model.

max_tokens (integer)
Maximum number of tokens to generate per output sequence.
Default: 128

presence_penalty (number, minimum: -5, maximum: 5)
Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
Default: 0

frequency_penalty (number, minimum: -5, maximum: 5)
Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
Default: 0

temperature (number, minimum: 0.01, maximum: 5)
Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.
Default: 0.8

top_p (number, minimum: 0.01, maximum: 1)
Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.
Default: 0.95

top_k (integer)
Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.
Default: -1

stop (string)
List of strings that stop the generation when they are generated. The returned output will not contain the stop strings.
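
For orientation, here is a minimal sketch of calling this model through the Replicate Python client with the inputs above. The parameter names follow the vLLM sampling-parameter conventions these descriptions mirror; check the page's API tab for the authoritative schema, and note that `replicate.run` needs the REPLICATE_API_TOKEN environment variable set.

```python
# Minimal sketch: call the model via the Replicate Python client
# (pip install replicate; REPLICATE_API_TOKEN must be set). Input names
# are assumed from the vLLM-style descriptions above.
import replicate

output = replicate.run(
    "tomasmcm/evolved-seeker-1.3b",  # pin a version hash for reproducibility
    input={
        "prompt": "Write a program to reverse the letters in each word of a sentence.",
        "max_tokens": 256,       # default: 128
        "temperature": 0.8,      # default
        "top_p": 0.95,           # default
        "top_k": -1,             # -1 = consider all tokens
        "presence_penalty": 0,
        "frequency_penalty": 0,
    },
)
print(output)
```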

Output

Here is a simple JavaScript program that achieves this:

```javascript
function reverseWordsInSentence(sentence) {
  return sentence.split(' ').map(word => word.split('').reverse().join('')).join(' ');
}

console.log(reverseWordsInSentence("Hello world")); // Outputs: "olleH dlrow"
```

This function works by first splitting the sentence into an array of words. Then, for each word, it splits the word into an array of characters, reverses that array, then joins it back together into a word. Finally, it joins all the words back together into a sentence.

Run time and cost

This model runs on Nvidia T4 GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

evolvedSeeker-1_3

EvolvedSeeker v0.0.1 (First phase)

This model is a fine-tuned version of deepseek-ai/deepseek-coder-1.3b-base on 50k instructions for 3 epochs.

I mostly curated instructions from the evolInstruct datasets, along with some portions of Glaive coder.

Around 3k answers were modified via self-instruct.

Collaborate with or consult me: Twitter, Discord.

The recommended prompt format is ChatML; Alpaca will also work, but take care to handle the EOT token.
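
For reference, a ChatML-formatted prompt looks like the sketch below. `tokenizer.apply_chat_template` in the snippet under "Chat Model Inference" builds this for you, so the model's own chat template is the authoritative source for the exact special tokens.

```python
# Sketch of a ChatML-style prompt (standard <|im_start|>/<|im_end|> markers
# assumed; the model's chat template is authoritative).
prompt = (
    "<|im_start|>user\n"
    "write a program to reverse letters in each word in a sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```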

Chat Model Inference

Use the Gradio inference notebook, which can easily run in a free Colab: Gradio Inference Notebook.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer; trust_remote_code allows any
# custom code shipped in the repo to run.
tokenizer = AutoTokenizer.from_pretrained("TokenBender/evolvedSeeker_1_3", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("TokenBender/evolvedSeeker_1_3", trust_remote_code=True).cuda()

messages = [
    {'role': 'user', 'content': "write a program to reverse letters in each word in a sentence without reversing order of words in the sentence."}
]

# Build the ChatML prompt from the chat template and move it to the GPU.
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# Generate greedily (do_sample=False); 32021 is the id of the <|EOT|> token,
# which ends the assistant's turn.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=32021)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```

Model description

First model of Project PIC (Partner-in-Crime) in the 1.3B range. Almost all the work for this model is still pending, hence v0.0.1.

Intended uses & limitations

  • Superfast copilot; runs near-lossless quantized in ~1 GB RAM (see the sketch below).
  • Useful for code dataset curation and evaluation.
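
The AWQ quant linked at the top (TheBloke/evolvedSeeker_1_3-AWQ) can be loaded directly with the Transformers version listed below, provided the autoawq package is installed. A minimal sketch, assuming a CUDA device:

```python
# Minimal sketch: load the 4-bit AWQ quant (assumes `pip install autoawq`;
# Transformers >= 4.35 loads AWQ checkpoints natively).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/evolvedSeeker_1_3-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda:0")
```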

Limitations: this is a smol model, so smol brain; it may have crammed a few things, and reasoning tests may fail beyond a certain point.

Training procedure

SFT (supervised fine-tuning).
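
The training code is not published here; purely as an illustration of what SFT on ~50k instructions can look like with the TRL library, here is a sketch in which every file name and hyperparameter is an assumption, not the author's recipe:

```python
# Illustrative only: generic SFT with TRL ~0.7.x (contemporary with the
# Transformers 4.35.2 listed below). NOT the author's actual training code.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Hypothetical file of 50k pre-formatted instruction/response strings.
dataset = load_dataset("json", data_files="instructions_50k.jsonl", split="train")

trainer = SFTTrainer(
    model="deepseek-ai/deepseek-coder-1.3b-base",  # base model per this README
    train_dataset=dataset,
    dataset_text_field="text",   # assumes each record holds one ChatML string
    max_seq_length=2048,
    args=TrainingArguments(output_dir="evolvedSeeker_1_3", num_train_epochs=3),
)
trainer.train()
```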

Training results

HumanEval score: 68.29%

A samples.jsonl file from the eval-bench results was uploaded recently, for transparency of the evaluation.

The score on eval bench is 67%.
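
For reference, a samples.jsonl in this format can be scored with OpenAI's human-eval harness; a sketch, assuming the file follows the harness's task_id/completion schema:

```python
# Sketch: score samples.jsonl with OpenAI's human-eval harness
# (https://github.com/openai/human-eval). Assumes records of the form
# {"task_id": ..., "completion": ...}.
from human_eval.evaluation import evaluate_functional_correctness

results = evaluate_functional_correctness("samples.jsonl", k=[1])
print(results)  # e.g. {"pass@1": 0.6829}
```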


Framework versions

  • Transformers 4.35.2
  • Pytorch 2.0.1
  • Datasets 2.15.0
  • Tokenizers 0.15.0