titocosta/meditron-70b-awq
Meditron-70B-v1.0 from Meditron's open-source suite of medical LLMs, quantized with AWQ.
Run titocosta/meditron-70b-awq with an API
Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
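As a minimal sketch, a call through Replicate's Python client might look like the following. The prompt text and the commented-out run call are illustrative; only `prompt` is required, and the other fields fall back to the defaults listed in the input schema below.

```python
# Sketch assuming Replicate's Python client (`pip install replicate`)
# and a REPLICATE_API_TOKEN set in the environment.

# Only "prompt" is required; the other fields shown here simply restate
# the model's defaults for clarity.
input_params = {
    "prompt": "What are common symptoms of iron-deficiency anemia?",
    "temperature": 0.2,     # default
    "max_new_tokens": 512,  # default
}

# The model streams output as string chunks, so join them for display:
# import replicate
# output = replicate.run("titocosta/meditron-70b-awq", input=input_params)
# print("".join(output))
```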
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field its default value will be used.
| Field | Type | Default value | Description |
|---|---|---|---|
| prompt | string | | Prompt |
| prompt_template | string | `<\|im_start\|>system\n{system_message}<\|im_end\|>\n<\|im_start\|>question\n{prompt}<\|im_end\|>\n<\|im_start\|>answer\n` | Prompt template |
| system_message | string | You are a helpful AI assistant trained in the medical domain | System message |
| max_new_tokens | integer | 512 | The maximum number of tokens the model should generate as output. |
| temperature | number | 0.2 | Model temperature |
| top_p | number | 0.95 | Top P |
| top_k | integer | 50 | Top K |
{
"type": "object",
"title": "Input",
"required": [
"prompt"
],
"properties": {
"top_k": {
"type": "integer",
"title": "Top K",
"default": 50,
"x-order": 6,
"description": "Top K"
},
"top_p": {
"type": "number",
"title": "Top P",
"default": 0.95,
"x-order": 5,
"description": "Top P"
},
"prompt": {
"type": "string",
"title": "Prompt",
"x-order": 0,
"description": "Prompt"
},
"temperature": {
"type": "number",
"title": "Temperature",
"default": 0.2,
"x-order": 4,
"description": "Model temperature"
},
"max_new_tokens": {
"type": "integer",
"title": "Max New Tokens",
"default": 512,
"x-order": 3,
"description": "The maximum number of tokens the model should generate as output."
},
"system_message": {
"type": "string",
"title": "System Message",
"default": "You are a helpful AI assistant trained in the medical domain",
"x-order": 2,
"description": "System message"
},
"prompt_template": {
"type": "string",
"title": "Prompt Template",
"default": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>question\n{prompt}<|im_end|>\n<|im_start|>answer\n",
"x-order": 1,
"description": "Prompt template"
}
}
}
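The `prompt_template` default is a plain format string: the client substitutes `{system_message}` and `{prompt}` before the text is sent to the model. A small sketch of how that substitution works (the question text here is just an example):

```python
# Default prompt template from the input schema above.
template = (
    "<|im_start|>system\n{system_message}<|im_end|>\n"
    "<|im_start|>question\n{prompt}<|im_end|>\n"
    "<|im_start|>answer\n"
)

# Fill in the default system message and an example user prompt.
filled = template.format(
    system_message="You are a helpful AI assistant trained in the medical domain",
    prompt="What is the first-line treatment for hypertension?",
)
print(filled)
```

The template ends after the `<|im_start|>answer` tag, so the model's generation continues directly as the answer.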
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
"type": "array",
"items": {
"type": "string"
},
"title": "Output",
"x-cog-array-type": "iterator",
"x-cog-array-display": "concatenate"
}
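Per the schema, the output is an iterator of strings (`x-cog-array-type: iterator`) that is meant to be displayed by concatenation (`x-cog-array-display: concatenate`). In practice that means joining the streamed chunks, as in this sketch with made-up chunk values:

```python
# Example chunks standing in for the streamed string tokens the API yields.
chunks = ["Iron-deficiency anemia ", "commonly presents with ", "fatigue and pallor."]

# Concatenate the iterator's items to reconstruct the full response text.
text = "".join(chunks)
print(text)
```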