
joehoover/mplug-owl:d3b69dbf

This version has been disabled because it consistently fails to complete setup.

Input

*string

Prompt to send to mPLUG-Owl.

*file

Image to send to mPLUG-Owl.
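
Since this version is disabled, the request below cannot actually run, but it sketches what a call with the two required inputs might look like using the Replicate Python client. The input keys `prompt` and `img` are assumptions (the schema above does not show field names), and the version hash is abbreviated:

```python
import replicate

# Illustrative only: the input keys "prompt" and "img" are assumed, and the
# full version hash must replace the abbreviated one shown here.
output = replicate.run(
    "joehoover/mplug-owl:d3b69dbf",
    input={
        "prompt": "Describe what is happening in this image.",
        "img": open("photo.jpg", "rb"),  # a local file handle or a public URL
    },
)
print("".join(output))  # join handles both streamed tokens and a plain string
```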

integer
(minimum: 1)

Maximum number of tokens to generate. A word is generally 2-3 tokens.

Default: 512

number
(minimum: 0.01, maximum: 5)

Adjusts randomness of outputs: values greater than 1 produce more random output, values near 0 are close to deterministic, and 0.75 is a good starting value.

Default: 0.75

number
(minimum: 0.01, maximum: 1)

When decoding text, samples from the top p percentage of most likely tokens; lower this value to ignore less likely tokens.

Default: 1

integer
(minimum: 1, maximum: 500)

When decoding text, samples from the top k most likely tokens; lower this value to ignore less likely tokens.

Default: 1
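
The decoding controls above (maximum token count, temperature, top p, top k) are typically passed alongside the prompt and image. A hedged sketch of such a payload follows; the key names `max_length`, `temperature`, `top_p`, and `top_k` are assumptions based on common conventions and are not confirmed by the schema:

```python
# Hypothetical payload; key names are assumed, not taken from the schema.
decoding_input = {
    "prompt": "What objects are on the table?",
    "img": "https://example.com/table.jpg",  # placeholder image URL
    "max_length": 512,    # cap on generated tokens (minimum 1)
    "temperature": 0.75,  # >1 more random, near 0 close to deterministic
    "top_p": 1.0,         # nucleus sampling; lower to drop unlikely tokens
    "top_k": 50,          # sample from the 50 most likely tokens per step
}
```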

number
(minimum: 0, maximum: 1)

When > 0 and top_k > 1, penalizes new tokens based on their similarity to previous tokens. Can help minimize repetition while maintaining semantic coherence. Set to 0 to disable.

Default: 0

number
(minimum: 0.01, maximum: 5)

Penalty for repeated words in generated text; 1 is no penalty, values greater than 1 discourage repetition, less than 1 encourage it.

Default: 1

number
(minimum: 0.01, maximum: 5)

Increasing the length_penalty parameter above 1.0 will cause the model to favor longer sequences, while decreasing it below 1.0 will cause the model to favor shorter sequences.

Default: 1

integer
(minimum: 0)

If set to a value greater than 0, no n-gram of that size can occur more than once in the generated text.

Default: 0
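
The four repetition controls above can be combined. The sketch below shows one plausible combination; again, the key names are assumptions (the schema does not show the name of the similarity-based penalty at all), so check the model's API before relying on them:

```python
# Hypothetical anti-repetition settings; key names are assumed.
repetition_settings = {
    "repetition_penalty": 1.2,   # >1 discourages repeated words, <1 encourages
    "length_penalty": 1.0,       # >1 favors longer sequences, <1 shorter
    "no_repeat_ngram_size": 3,   # no 3-gram may occur more than once
    # The similarity-based penalty (default 0) only takes effect when it is
    # set above 0 and top_k is greater than 1; its field name is not shown.
}
```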

integer
(minimum: -1)

Set seed for reproducible outputs. Set to -1 for random seed.

Default: -1
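
Fixing the seed is what makes runs repeatable. A small reproducibility sketch, assuming the field is named `seed` and that this version were enabled:

```python
import replicate

# Assumed field name "seed"; abbreviated version hash as above.
inputs = {
    "prompt": "Describe the image.",
    "img": "https://example.com/photo.jpg",  # placeholder URL
    "seed": 42,
}
first = "".join(replicate.run("joehoover/mplug-owl:d3b69dbf", input=inputs))
second = "".join(replicate.run("joehoover/mplug-owl:d3b69dbf", input=inputs))
assert first == second  # same seed and settings should reproduce the same text
```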

boolean

Provide debugging output in logs.

Default: false

Output

No example output is available for this version.