tomasmcm / sensei-7b-v1

Source: SciPhi/Sensei-7B-V1 ✦ Quant: TheBloke/Sensei-7B-V1-AWQ ✦ Sensei is specialized in performing RAG over detailed web search results

  • Public
  • 34 runs
  • L40S
  • Paper
  • License

Input

string (required)

Text prompt to send to the model.

integer

Maximum number of tokens to generate per output sequence.

Default: 128

number
(minimum: -5, maximum: 5)

Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.

Default: 0

number
(minimum: -5, maximum: 5)

Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.

Default: 0

number
(minimum: 0.01, maximum: 5)

Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.

Default: 0.8

number
(minimum: 0.01, maximum: 1)

Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.

Default: 0.95

integer

Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.

Default: -1

string

List of strings that stop the generation when they are generated. The returned output will not contain the stop strings.
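Taken together, these inputs follow vLLM-style sampling parameters. A minimal sketch of calling the model through the Replicate Python client is below; the input field names (`max_tokens`, `presence_penalty`, etc.) are assumptions inferred from the descriptions above, so verify them against the API schema on this page before relying on them:

```python
import os

# Field names are assumed from the parameter descriptions above;
# check the model's API schema before use.
inputs = {
    "prompt": "What is Barack Obama's middle name?",
    "max_tokens": 128,         # default 128
    "presence_penalty": 0.0,   # -5 to 5; > 0 discourages repeated tokens
    "frequency_penalty": 0.0,  # -5 to 5; penalizes by token frequency
    "temperature": 0.8,        # 0.01 to 5; lower = more deterministic
    "top_p": 0.95,             # 0.01 to 1; 1 considers all tokens
    "top_k": -1,               # -1 considers all tokens
}

# Only attempt the network call when credentials are present.
if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate

    output = replicate.run("tomasmcm/sensei-7b-v1", input=inputs)
    print("".join(output))
```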

Output

"\nBarack Obama's middle name is \"Hussein\", which is also a common Arabic name meaning \"good\" or \"handsome\" [2]. This name holds personal significance as it was inherited from his father, also named Barack Hussein Obama, who was born in Kenya [1][3]. During his presidency, Obama's full name, Barack Hussein Obama II, was frequently used, emphasizing his heritage and cultural background [3]. However, due to political and cultural dynamics, his middle name sometimes attracted attention and became a subject of debate and speculation [1].\n\nIn summary, Barack Obama's middle name is \"Hussein,\" reflecting his family lineage and his African heritage, which played a role in the historical context of his presidency.\n\n", "other_queries": ["Political implications of Barack Obama's middle name", "Barack Obama's upbringing and cultural identity", "Barack Obama's response to questions about his middle name", "Influence of Obama's middle name on his presidential campaign", "Barack Obama's legacy and his middle name"]}

Run time and cost

This model costs approximately $0.0076 to run on Replicate, or 131 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 8 seconds. The predict time for this model varies significantly based on the inputs.

Readme

Sensei-7B-V1 Model Card

Sensei-7B-V1 is a Large Language Model (LLM) fine-tuned from OpenPipe's mistral-ft-optimized-1218, which is based on Mistral-7B. Sensei-7B-V1 was fine-tuned on a fully synthetic dataset to specialize in retrieval-augmented generation (RAG) over detailed web search results. The model is designed to use search tools, such as AgentSearch, to generate accurate and well-cited summaries from a range of search results, providing more accurate answers to user queries. Please refer to the docs here for more information on how to run Sensei end-to-end.

Currently, Sensei is available via hosted api at https://www.sciphi.ai. You can try a demonstration here.

Model Architecture

Base Model: mistral-ft-optimized-1218

Architecture Features:

  • Transformer-based model
  • Grouped-Query Attention
  • Sliding-Window Attention
  • Byte-fallback BPE tokenizer

Using the Model

It is recommended to use a single search query. The model will return an answer using search results as context.

An example using the AgentSearch package is shown below.

export SCIPHI_API_KEY=MY_SCIPHI_API_KEY
# Use `Sensei` for LLM RAG w/ AgentSearch
python -m agent_search.scripts.run_rag run --query="What is Fermat's last theorem?"

Alternatively, you may provide your own search context directly to the model by adhering to the following format:

### Instruction: 
Your task is to perform retrieval augmented generation (RAG) over the given query and search results. Return your answer in a json format that includes a summary of the search results and a list of related queries. 

Query:
{prompt}
\n\n
Search Results:
{context}
\n\n
Query:
{prompt}

### Response:
{"summary":

Note: The inclusion of the text '{"summary":' after the Response footer is intentional. It ensures that the model responds in the proper JSON format; omitting this leading prefix can cause small deviations. Combining the model's output with the leading string '{"summary":' yields properly formatted JSON with the keys 'summary' and 'other_queries'.
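As a sketch, the template and prefix handling described above can be stitched together as follows; the completion string is a stand-in illustrating the expected shape, not a real model response:

```python
import json


def build_prompt(query: str, context: str) -> str:
    # Follows the template shown above, including the literal "\n\n"
    # separator lines and the trailing '{"summary":' prefix.
    return (
        "### Instruction: \n"
        "Your task is to perform retrieval augmented generation (RAG) over "
        "the given query and search results. Return your answer in a json "
        "format that includes a summary of the search results and a list of "
        "related queries. \n\n"
        f"Query:\n{query}\n\\n\\n\n"
        f"Search Results:\n{context}\n\\n\\n\n"
        f"Query:\n{query}\n\n"
        '### Response:\n{"summary":'
    )


def parse_completion(completion: str) -> dict:
    # The raw completion omits the leading '{"summary":', so it must be
    # prepended before the result parses as JSON.
    return json.loads('{"summary":' + completion)


# Stand-in completion illustrating the expected output shape.
fake_completion = ' "An example summary. [1]", "other_queries": ["a related query"]}'
result = parse_completion(fake_completion)
```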

Built with Axolotl

References

  1. OpenPipe AI. (2023). Model Card for mistral-ft-optimized-1218. The mistral-ft-1218 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters optimized for downstream fine-tuning on a variety of tasks. For full details, please refer to the release blog post. Model Architecture: Transformer with Grouped-Query Attention, Sliding-Window Attention, and Byte-fallback BPE tokenizer. Link