
kcaverly/nexus-raven-v2-13b-gguf:adc42ab5

Input

*string

Instruction for the model.

integer

Maximum number of new tokens to generate.

Default: -1

number

This is the sampling temperature, which controls how random the model's output is. It rescales the probability distribution over next tokens before sampling: higher values flatten the distribution, yielding more diverse and creative responses, while lower values sharpen it, yielding conservative, near-deterministic output that stays close to the model's most likely completion. The very low default keeps generation effectively greedy, which suits structured outputs such as function calls.

Default: 0.001
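The effect of temperature can be sketched with a small softmax example in plain Python (this is an illustration of the general technique, not this model's actual sampler):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before applying softmax.

    Low temperatures sharpen the distribution (near-greedy sampling);
    high temperatures flatten it (more diverse sampling).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits for three candidate tokens.
logits = [2.0, 1.0, 0.5]

# Near-zero temperature, like the 0.001 default: effectively greedy,
# nearly all probability mass lands on the top token.
cold = softmax_with_temperature(logits, 0.001)

# Higher temperature: probability mass spreads across tokens.
warm = softmax_with_temperature(logits, 1.5)
```

With `temperature=0.001`, `cold` is essentially one-hot on the highest-logit token, which is why the default makes this model's outputs close to deterministic.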

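A minimal usage sketch with the official Replicate Python client might look like the following. The input field names (`prompt`, `max_new_tokens`, `temperature`) are assumed from the form above and may not match the model's actual schema; the version reference is copied as displayed on this page.

```python
# Assumed input payload; field names are inferred from the form above
# and are not confirmed against the model's schema.
input_payload = {
    "prompt": "Instruction for the model goes here.",
    "max_new_tokens": 256,
    "temperature": 0.001,
}

# Uncomment to run a prediction (requires `pip install replicate`
# and a REPLICATE_API_TOKEN environment variable):
#
# import replicate
# output = replicate.run(
#     "kcaverly/nexus-raven-v2-13b-gguf:adc42ab5",
#     input=input_payload,
# )
# print(output)
```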