hamelsmu / axolotl


Run hamelsmu/axolotl with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
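As a rough sketch, here is how a call might look with Replicate's Python client. The input fields and defaults mirror the schema below; note that the exact model version hash is not shown on this page, so pinning a version (e.g. `"hamelsmu/axolotl:<version>"`) may be required in practice, and `build_input` is a hypothetical helper, not part of the client.

```python
def build_input(prompt, **overrides):
    """Assemble the input payload, starting from the schema defaults below."""
    payload = {
        "prompt": prompt,
        "max_new_tokens": 512,
        "temperature": 0.7,
        "do_sample": True,
        "top_p": 0.95,
        "top_k": 50,
    }
    payload.update(overrides)  # e.g. temperature=0.2 for more deterministic output
    return payload

if __name__ == "__main__":
    import replicate  # pip install replicate; requires REPLICATE_API_TOKEN in the environment

    # A version hash may need to be appended after a colon for community models.
    output = replicate.run("hamelsmu/axolotl", input=build_input("Hello!"))
    # The output is an iterator of strings; concatenate for the full text.
    print("".join(output))
```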

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

prompt
  Type: string
  Default: None

max_new_tokens
  Type: integer
  Default: 512
  The maximum number of tokens the model should generate as output.

temperature
  Type: number
  Default: 0.7
  The value used to modulate the next token probabilities.

do_sample
  Type: boolean
  Default: True
  Whether or not to use sampling; otherwise use greedy decoding.

top_p
  Type: number
  Default: 0.95
  A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).

top_k
  Type: integer
  Default: 50
  The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering).

prompt_template
  Type: string
  Default: ### System: Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Translate the input from English to Hinglish ### Input: {prompt} ### Response:
  The template used to format the prompt before passing it to the model. For no template, you can set this to `{prompt}`.
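The templating step likely amounts to substituting the user's prompt into the `{prompt}` placeholder before the text reaches the model. A minimal sketch, using a shortened hypothetical template rather than the full default above:

```python
def apply_template(template: str, prompt: str) -> str:
    """Substitute the user prompt into the template's {prompt} placeholder."""
    return template.replace("{prompt}", prompt)

# Hypothetical shortened template; the real default is shown above, and its
# exact line breaks are not visible on this page.
template = "### Instruction:\n{prompt}\n### Response:"
print(apply_template(template, "Translate: good morning"))

# Setting the template to "{prompt}" passes the prompt through unchanged.
print(apply_template("{prompt}", "raw prompt"))  # raw prompt
```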

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "array",
  "items": {
    "type": "string"
  },
  "title": "Output",
  "x-cog-array-type": "iterator",
  "x-cog-array-display": "concatenate"
}
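Per the schema, the output is an iterator of strings (`"x-cog-array-type": "iterator"`) that clients are expected to concatenate (`"x-cog-array-display": "concatenate"`). A sketch of consuming such a stream, with a generator standing in for the live API response:

```python
def fake_stream():
    """Hypothetical stand-in for the streamed API output: yields text chunks."""
    yield "Hello"
    yield ", "
    yield "world!"

def collect(stream) -> str:
    """Concatenate streamed string chunks into the final output text."""
    return "".join(stream)

print(collect(fake_stream()))  # Hello, world!
```

The same `"".join(...)` pattern applies to the real iterator returned by a client library.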