kcaverly /deepseek-coder-6.7b-instruct:4d38f86e

Input

string (required)
Chat messages, passed as a JSON string.

integer (default: 512)
Maximum number of new tokens to generate.

boolean (default: false)
Whether or not to use sampling; greedy decoding is used otherwise.

integer
The number of highest-probability vocabulary tokens to keep for top-k filtering.

number
If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher is kept for generation.

integer (default: 1)
The number of independently computed sequences to return for each element in the batch.
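The top-k and top-p parameters above both prune the model's next-token distribution before sampling. A minimal sketch of how the two filters behave, applied to a toy distribution (the real filtering happens on the model's logits, not on a hand-written dict):

```python
# Toy next-token probability distribution, for illustration only.
probs = {"def": 0.40, "return": 0.25, "class": 0.20, "import": 0.10, "pass": 0.05}

def top_k_filter(probs, k):
    """Keep only the k highest-probability tokens."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

def top_p_filter(probs, p):
    """Keep the smallest set of most probable tokens whose probabilities
    add up to p or higher (nucleus sampling)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = {}, 0.0
    for token, prob in ranked:
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return kept

# top_k=3 keeps the three most likely tokens: "def", "return", "class".
# top_p=0.8 also keeps "def", "return", "class" (0.40 + 0.25 + 0.20 = 0.85 >= 0.8).
```

Note that neither filter has any effect unless sampling is enabled (the boolean above); with greedy decoding the single most likely token is always chosen.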
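Putting the schema together, a sketch of a request payload for this model. The parameter names used here (messages, max_new_tokens, do_sample, top_k, top_p, num_return_sequences) are assumptions based on the standard Hugging Face generate() keywords, since the page above lists only types and descriptions; check the model's API tab for the exact names.

```python
import json

# Hypothetical chat history; the model expects its chat input as a JSON string.
messages = [
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# Input payload mirroring the schema above; key names are assumptions,
# not confirmed by this page.
model_input = {
    "messages": json.dumps(messages),
    "max_new_tokens": 512,
    "do_sample": True,       # enable sampling so top_k / top_p take effect
    "top_k": 50,
    "top_p": 0.95,
    "num_return_sequences": 1,
}

# To run the prediction (requires the `replicate` client and an API token):
# import replicate
# output = replicate.run(
#     "kcaverly/deepseek-coder-6.7b-instruct:4d38f86e",
#     input=model_input,
# )
```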
