tomasmcm / docsgpt-7b-mistral

Source: Arc53/docsgpt-7b-mistral ✦ Quant: TheBloke/docsgpt-7B-mistral-AWQ ✦ DocsGPT is optimized for Documentation (RAG), fine-tuned for providing answers that are based on context

  • Public
  • 74 runs
  • L40S

Input

  • string (required): Text prompt to send to the model.
  • integer: Maximum number of tokens to generate per output sequence. Default: 128
  • number (minimum: -5, maximum: 5): Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens. Default: 0
  • number (minimum: -5, maximum: 5): Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens. Default: 0
  • number (minimum: 0.01, maximum: 5): Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling. Default: 0.8
  • number (minimum: 0.01, maximum: 1): Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens. Default: 0.95
  • integer: Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens. Default: -1
  • string: List of strings that stop the generation when they are generated. The returned output will not contain the stop strings.
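
Below is a minimal sketch of calling the model with these parameters through the Replicate Python client. The schema above does not show the input field names, so the names used here (prompt, max_new_tokens, presence_penalty, frequency_penalty, temperature, top_p, top_k) follow vLLM's sampling-parameter conventions and are assumptions; check the model's API schema for the exact names.

```python
import replicate

# Assumed input names (vLLM-style); verify against the model's actual schema.
# Pinning a version hash ("tomasmcm/docsgpt-7b-mistral:<hash>") is recommended.
output = replicate.run(
    "tomasmcm/docsgpt-7b-mistral",
    input={
        "prompt": (
            "### Instruction\n"
            "When was Aquaman and the Lost Kingdom released?\n"
            "### Context\n"
            "(retrieved documentation goes here)\n"
            "### Answer"
        ),
        "max_new_tokens": 128,   # default: 128
        "temperature": 0.8,      # default: 0.8
        "top_p": 0.95,           # default: 0.95
        "top_k": -1,             # -1 considers all tokens
        "presence_penalty": 0,
        "frequency_penalty": 0,
    },
)
# Language models on Replicate typically stream output as an iterator of strings.
print("".join(output))
```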

Output

Aquaman and the Lost Kingdom was released on December 22, 2023, in the United States by Warner Bros. Pictures.

Run time and cost

This model costs approximately $0.0037 to run on Replicate, or 270 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 4 seconds. The predict time for this model varies significantly based on the inputs.
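
One way to run it on your own machine is through the container Replicate publishes; Cog model containers serve an HTTP prediction endpoint on port 5000. The sketch below assumes the image has been pulled and started locally (the image path follows Replicate's r8.im convention, and the input field name is an assumption, as above):

```python
import requests

# Assumes the container is already running locally, e.g. started with:
#   docker run -p 5000:5000 r8.im/tomasmcm/docsgpt-7b-mistral
# Cog containers expose a POST /predictions endpoint.
resp = requests.post(
    "http://localhost:5000/predictions",
    json={
        "input": {
            "prompt": (
                "### Instruction\n"
                "What is DocsGPT optimized for?\n"
                "### Context\n"
                "DocsGPT is optimized for answering questions from documentation.\n"
                "### Answer"
            )
        }
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["output"])
```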

Readme

DocsGPT is optimized for documentation (RAG): it is specifically fine-tuned to provide answers that are grounded in the supplied context, making it particularly useful for developers and technical support teams. It was fine-tuned with LoRA on top of zephyr-7b-beta.

It is released under the Apache-2.0 license, so you can use it for commercial purposes too.

Benchmarks:

BACON: The BACON test is an internal assessment designed to evaluate the capabilities of neural networks in handling questions with substantial content. It focuses on testing the model’s understanding of context-driven queries, as well as its tendency for hallucination and attention span. The questions in both parts are carefully crafted, drawing from diverse sources such as scientific papers, complex code problems, and instructional prompts, providing a comprehensive test of the model’s ability to process and generate information in various domains.

| Model                        | Score |
|------------------------------|-------|
| gpt-4                        | 8.74  |
| DocsGPT-7b-Mistral           | 8.64  |
| gpt-3.5-turbo                | 8.42  |
| zephyr-7b-beta               | 8.37  |
| neural-chat-7b-v3-1          | 7.88  |
| Mistral-7B-Instruct-v0.1     | 7.44  |
| openinstruct-mistral-7b      | 5.86  |
| llama-2-13b                  | 2.29  |


MT-Bench (with LLM judge):


#### First turn

| Model                | Turn | Score    |
|----------------------|------|----------|
| gpt-4                | 1    | 8.956250 |
| gpt-3.5-turbo        | 1    | 8.075000 |
| DocsGPT-7b-Mistral   | 1    | 7.593750 |
| zephyr-7b-beta       | 1    | 7.412500 |
| vicuna-13b-v1.3      | 1    | 6.812500 |
| alpaca-13b           | 1    | 4.975000 |
| deepseek-coder-6.7b  | 1    | 4.506329 |

#### Second turn

| Model                | Turn | Score    |
|----------------------|------|----------|
| gpt-4                | 2    | 9.025000 |
| gpt-3.5-turbo        | 2    | 7.812500 |
| DocsGPT-7b-Mistral   | 2    | 6.740000 |
| zephyr-7b-beta       | 2    | 6.650000 |
| vicuna-13b-v1.3      | 2    | 5.962500 |
| deepseek-coder-6.7b  | 2    | 5.025641 |
| alpaca-13b           | 2    | 4.087500 |

#### Average

| Model                | Score    |
|----------------------|----------|
| gpt-4                | 8.990625 |
| gpt-3.5-turbo        | 7.943750 |
| DocsGPT-7b-Mistral   | 7.166875 |
| zephyr-7b-beta       | 7.031250 |
| vicuna-13b-v1.3      | 6.387500 |
| deepseek-coder-6.7b  | 4.764331 |
| alpaca-13b           | 4.531250 |

To prepare your prompts, make sure you keep this format:

```
### Instruction
(where the question goes)
### Context
(your document retrieval + system instructions)
### Answer
```
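
For instance, a small helper that assembles a prompt in this format could look like the sketch below; the function name and example strings are illustrative, not part of the model:

```python
def build_prompt(question: str, context: str) -> str:
    # Hypothetical helper: fills in the DocsGPT prompt template shown above.
    return (
        "### Instruction\n"
        f"{question}\n"
        "### Context\n"
        f"{context}\n"
        "### Answer"
    )


# Example usage with a retrieved documentation snippet.
prompt = build_prompt(
    "When was Aquaman and the Lost Kingdom released?",
    "Aquaman and the Lost Kingdom was released on December 22, 2023, "
    "in the United States by Warner Bros. Pictures.",
)
print(prompt)
```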