Input:

{
  "debug": false,
  "max_new_tokens": 500,
  "min_new_tokens": -1,
  "prompt": "Could you briefly explain how self attention works?",
  "return_logits": false,
  "system_prompt": "You are a machine learning professor who is famous for simple, intuitive explanations.",
  "temperature": 0.75,
  "top_k": 50,
  "top_p": 0.9
}

Install Replicate's Node.js client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=r8_cxb**********************************
This is your API token. Keep it to yourself.
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run joehoover/zephyr-7b-alpha using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "joehoover/zephyr-7b-alpha:14ec63365a1141134c41b652fe798633f48b1fd28b356725c4d8842a0ac151ee",
  {
    input: {
      debug: false,
      max_new_tokens: 500,
      min_new_tokens: -1,
      prompt: "Could you briefly explain how self attention works?",
      return_logits: false,
      system_prompt: "You are a machine learning professor who is famous for simple, intuitive explanations.",
      temperature: 0.75,
      top_k: 50,
      top_p: 0.9
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=r8_cxb**********************************
This is your API token. Keep it to yourself.
import replicate
Run joehoover/zephyr-7b-alpha using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "joehoover/zephyr-7b-alpha:14ec63365a1141134c41b652fe798633f48b1fd28b356725c4d8842a0ac151ee",
    input={
        "debug": False,
        "max_new_tokens": 500,
        "min_new_tokens": -1,
        "prompt": "Could you briefly explain how self attention works?",
        "return_logits": False,
        "system_prompt": "You are a machine learning professor who is famous for simple, intuitive explanations.",
        "temperature": 0.75,
        "top_k": 50,
        "top_p": 0.9
    }
)

# The joehoover/zephyr-7b-alpha model can stream output as it's running.
# The predict method returns an iterator, and you can iterate over that output.
for item in output:
    # https://replicate.com/joehoover/zephyr-7b-alpha/api#output-schema
    print(item, end="")
To learn more, take a look at the guide on getting started with Python.
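Each streamed item is a small string chunk, and some chunks are empty (as in the example output further down this page). A common pattern is to collect the chunks and join them into the full response. A minimal sketch, using illustrative sample chunks rather than real API results:

```python
# Illustrative sample of streamed chunks (not real API results); note the
# empty strings, which the model emits between some tokens.
chunks = ["Sure! ", "", "Self-attention ", "is ", "a ", "mechanism."]

# Joining on the empty string reconstructs the full response text,
# because each chunk already carries its own trailing whitespace.
full_text = "".join(chunks)
print(full_text)  # Sure! Self-attention is a mechanism.
```

Empty chunks are harmless here: joining on `""` simply skips them.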
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=r8_cxb**********************************
This is your API token. Keep it to yourself.
Run joehoover/zephyr-7b-alpha using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "joehoover/zephyr-7b-alpha:14ec63365a1141134c41b652fe798633f48b1fd28b356725c4d8842a0ac151ee",
    "input": {
      "debug": false,
      "max_new_tokens": 500,
      "min_new_tokens": -1,
      "prompt": "Could you briefly explain how self attention works?",
      "return_logits": false,
      "system_prompt": "You are a machine learning professor who is famous for simple, intuitive explanations.",
      "temperature": 0.75,
      "top_k": 50,
      "top_p": 0.9
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Sure! Self-attention is a mechanism in neural networks that allows the model to focus on specific parts of the input data that are relevant for the task at hand. Here's a brief overview:

1. The input data is split into smaller pieces, called "tokens" or "words".
2. Each token is represented as a vector, which can be thought of as a "feature vector" that encodes information about that token.
3. The model computes a "query" vector and a "key" vector for each token.
4. The model then calculates a "score" for each pair of query and key vectors.
5. The model then "weights" the scores based on how closely the query matches the key.
6. The model then calculates a "context" vector for each token by summing the weighted scores from all other tokens.
7. The context vector for each token provides information about how important that token is for the overall task, and can be used to generate an output or to feed into another layer of the model.

Self-attention is a powerful tool for tasks that involve understanding complex relationships between different parts of the input data, such as machine translation or document classification.
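The steps in the model's answer can be sketched numerically. This is a toy illustration of a single attention head in pure Python, not the model's actual implementation: real transformers produce queries, keys, and values through learned linear projections, whereas here the token embeddings are used directly for all three roles.

```python
import math

# Toy token embeddings: 3 tokens, 2 dimensions each (steps 1-2).
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

contexts = []
for q in tokens:                             # each token acts as a query (step 3)
    scores = [dot(q, k) for k in tokens]     # query-key scores (step 4)
    weights = softmax(scores)                # normalize scores to weights (step 5)
    context = [                              # weighted sum over all tokens (step 6)
        sum(w * v[d] for w, v in zip(weights, tokens))
        for d in range(len(q))
    ]
    contexts.append(context)                 # one context vector per token (step 7)

for c in contexts:
    print([round(x, 3) for x in c])
```

Note that every context vector mixes information from all tokens, weighted by how strongly each query matches each key; that is the sense in which the model "focuses" on relevant parts of the input.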
{
"id": "d4vhk3tbsywlhylzl53kbnvjvu",
"model": "joehoover/zephyr-7b-alpha",
"version": "14ec63365a1141134c41b652fe798633f48b1fd28b356725c4d8842a0ac151ee",
"input": {
"debug": false,
"max_new_tokens": 500,
"min_new_tokens": -1,
"prompt": "Could you briefly explain how self attention works?",
"return_logits": false,
"system_prompt": "You are a machine learning professor who is famous for simple, intuitive explanations.",
"temperature": 0.75,
"top_k": 50,
"top_p": 0.9
},
"logs": "You are a machine learning professor who is famous for simple, intuitive explanations.\nPROMPT_TEMPLATE: <|system|>\n{system_prompt}</s>\n<|user|>\n{instruction}</s>\n<|assistant|>\nYour formatted prompt is:\n<|system|>\nYou are a machine learning professor who is famous for simple, intuitive explanations.</s>\n<|user|>\nCould you briefly explain how self attention works?</s>\n<|assistant|>\nDisabling loras\nNo loras were ever loaded, nothing to disable.\nNot using LoRA\nTokens: tensor([[ 1, 523, 28766, 6574, 28766, 28767, 13, 1976, 460, 264,\n5599, 5168, 12192, 693, 349, 8376, 354, 3588, 28725, 21141,\n2468, 10928, 697, 28723, 2, 523, 28766, 1838, 28766, 28767,\n13, 17024, 368, 15643, 7282, 910, 1008, 4501, 3791, 28804,\n2, 523, 28766, 489, 11143, 28766, 28767, 13]],\ndevice='cuda:0')\nhostname: model-14ec6336-bdb00b47d4679dba-gpu-a40-557dfb9fc5-gvgj5",
"output": [
"",
"",
"Sure! ",
"",
"",
"",
"Self-attention ",
"is ",
"a ",
"mechanism ",
"in ",
"neural ",
"networks ",
"that ",
"allows ",
"the ",
"model ",
"to ",
"focus ",
"on ",
"specific ",
"parts ",
"of ",
"the ",
"input ",
"data ",
"that ",
"are ",
"relevant ",
"for ",
"the ",
"task ",
"at ",
"",
"hand.\n",
"\n",
"",
"",
"",
"Here's ",
"a ",
"brief ",
"",
"overview:\n",
"\n",
"",
"",
"1. ",
"The ",
"input ",
"data ",
"is ",
"split ",
"into ",
"smaller ",
"",
"pieces, ",
"called ",
"",
"",
"\"tokens\" ",
"or ",
"",
"",
"\"words\".\n",
"\n",
"",
"",
"2. ",
"Each ",
"token ",
"is ",
"represented ",
"as ",
"a ",
"",
"vector, ",
"which ",
"can ",
"be ",
"thought ",
"of ",
"as ",
"a ",
"",
"\"feature ",
"",
"vector\" ",
"that ",
"",
"encodes ",
"information ",
"about ",
"that ",
"",
"token.\n",
"\n",
"",
"",
"3. ",
"The ",
"model ",
"",
"computes ",
"a ",
"",
"",
"\"query\" ",
"vector ",
"and ",
"a ",
"",
"",
"\"key\" ",
"vector ",
"for ",
"each ",
"",
"token.\n",
"\n",
"",
"",
"4. ",
"The ",
"model ",
"then ",
"",
"calculates ",
"a ",
"",
"",
"\"score\" ",
"for ",
"each ",
"pair ",
"of ",
"query ",
"and ",
"key ",
"",
"vectors.\n",
"\n",
"",
"",
"5. ",
"The ",
"model ",
"then ",
"",
"",
"\"weights\" ",
"the ",
"scores ",
"based ",
"on ",
"how ",
"closely ",
"the ",
"query ",
"matches ",
"the ",
"",
"key.\n",
"\n",
"",
"",
"6. ",
"The ",
"model ",
"then ",
"",
"calculates ",
"a ",
"",
"",
"\"context\" ",
"vector ",
"for ",
"each ",
"token ",
"by ",
"",
"summing ",
"the ",
"",
"weighted ",
"scores ",
"from ",
"all ",
"other ",
"",
"tokens.\n",
"\n",
"",
"",
"7. ",
"The ",
"context ",
"vector ",
"for ",
"each ",
"token ",
"provides ",
"information ",
"about ",
"how ",
"important ",
"that ",
"token ",
"is ",
"for ",
"the ",
"overall ",
"",
"task, ",
"and ",
"can ",
"be ",
"used ",
"to ",
"generate ",
"an ",
"output ",
"or ",
"to ",
"feed ",
"into ",
"another ",
"layer ",
"of ",
"the ",
"",
"model.\n",
"\n",
"",
"",
"",
"",
"Self-attention ",
"is ",
"a ",
"powerful ",
"tool ",
"for ",
"tasks ",
"that ",
"involve ",
"understanding ",
"complex ",
"relationships ",
"between ",
"different ",
"parts ",
"of ",
"the ",
"input ",
"",
"data, ",
"such ",
"as ",
"machine ",
"translation ",
"or ",
"document ",
"",
"",
"classification."
],
"data_removed": false,
"error": null,
"source": "web",
"status": "succeeded",
"created_at": "2023-10-13T17:10:40.933795Z",
"started_at": "2023-10-13T17:10:40.873404Z",
"completed_at": "2023-10-13T17:10:58.892613Z",
"urls": {
"cancel": "https://api.replicate.com/v1/predictions/d4vhk3tbsywlhylzl53kbnvjvu/cancel",
"get": "https://api.replicate.com/v1/predictions/d4vhk3tbsywlhylzl53kbnvjvu"
},
"metrics": {
"predict_time": 18.019209,
"total_time": 17.958818
}
}
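The "logs" field above shows the prompt template the model applies before generation. The same formatting can be reproduced in Python; the template string below is copied from those logs, and this sketch only illustrates the string substitution, not anything the client library does for you:

```python
# Zephyr-style chat template, as shown in the prediction logs above.
PROMPT_TEMPLATE = (
    "<|system|>\n{system_prompt}</s>\n"
    "<|user|>\n{instruction}</s>\n"
    "<|assistant|>\n"
)

# Fill in the same system prompt and user prompt used in the prediction.
formatted = PROMPT_TEMPLATE.format(
    system_prompt="You are a machine learning professor who is famous for simple, intuitive explanations.",
    instruction="Could you briefly explain how self attention works?",
)
print(formatted)
```

The trailing `<|assistant|>\n` is what cues the model to begin its reply; the formatted string matches the "Your formatted prompt is:" section of the logs.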