kcaverly/deepseek-coder-6.7b-instruct:0c46b835
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
Field | Type | Default value | Description
---|---|---|---
messages | string | | Chat messages, passed as a JSON string.
max_new_tokens | integer | 512 | Maximum number of new tokens to generate.
do_sample | boolean | False | Whether or not to use sampling; use greedy decoding otherwise.
top_k | integer | | The number of highest-probability vocabulary tokens to keep for top-k filtering.
top_p | number | | If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
num_return_sequences | integer | 1 | The number of independently computed returned sequences for each element in the batch.
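To show how these fields fit together, here is a minimal sketch using the Replicate Python client. The chat-style role/content layout of `messages` is an assumption (the schema only requires a JSON string), and the version id is abbreviated exactly as it appears on this page; substitute the full id when calling the API.

```python
# Minimal sketch: running this model via the Replicate Python client.
# Assumes REPLICATE_API_TOKEN is set in the environment.
import json

import replicate

# Assumed chat format (role/content); the schema only says "a JSON string".
messages = [
    {"role": "user", "content": "Write a Python function that reverses a string."}
]

output = replicate.run(
    "kcaverly/deepseek-coder-6.7b-instruct:0c46b835",  # abbreviated version id from this page
    input={
        "messages": json.dumps(messages),  # must be a JSON string, not a list
        "max_new_tokens": 512,
        "do_sample": False,  # greedy decoding; set True to enable top_k / top_p sampling
    },
)
```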
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
```json
{
  "items": {"type": "string"},
  "title": "Output",
  "type": "array",
  "x-cog-array-display": "concatenate",
  "x-cog-array-type": "iterator"
}
```
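Because the output is an iterator of string chunks marked `"x-cog-array-display": "concatenate"`, the pieces need to be joined on the client side to reconstruct the full generated text. A minimal sketch, assuming the Replicate Python client and the same abbreviated version id as above:

```python
# Minimal sketch: consuming the iterator output. Each item is a string chunk;
# concatenate them to recover the complete response.
import replicate

chunks = []
for chunk in replicate.run(
    "kcaverly/deepseek-coder-6.7b-instruct:0c46b835",  # abbreviated version id from this page
    input={"messages": '[{"role": "user", "content": "Hello!"}]'},
):
    print(chunk, end="")  # stream chunks as they arrive
    chunks.append(chunk)

reply = "".join(chunks)  # full concatenated output
```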