okaris/progen2
ProGen: Language Modeling for Protein Engineering
Run okaris/progen2 with an API
Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
| Field | Type | Default value | Description |
|---|---|---|---|
| model | string (enum) | `progen2-small` | Model id. Options: `progen2-small`, `progen2-medium`, `progen2-oas`, `progen2-base`, `progen2-large`, `progen2-BFD90`, `progen2-xlarge` |
| device | string | `cuda:0` | Device to run model on |
| rng_seed | integer | `42` | Random number generator seed |
| rng_deterministic | boolean | `true` | Use deterministic RNG |
| p | number | `0.9` | Probability of sampling from top-k |
| t | number | `0.8` | Temperature for top-k sampling |
| max_length | integer | `1024` | Maximum length of generated text |
| num_samples | integer | `2` | Number of samples to generate |
| fp16 | boolean | `true` | Use mixed precision |
| context | string | `1` | Context to use for generation |
| sanity | boolean | `true` | Run sanity check |
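As a concrete illustration, the defaults above assemble into the input payload below. This is a sketch: the field names and defaults come from the table, but how you actually submit the payload depends on which client library you use.

```python
# Input payload built from the schema defaults in the table above.
default_input = {
    "model": "progen2-small",    # one of the seven enum options
    "device": "cuda:0",
    "rng_seed": 42,
    "rng_deterministic": True,
    "p": 0.9,
    "t": 0.8,
    "max_length": 1024,
    "num_samples": 2,
    "fp16": True,
    "context": "1",              # note: a string, not an integer
    "sanity": True,
}

# Override only the fields you care about; omitted fields fall back
# to their defaults on the server side.
request_input = {**default_input, "model": "progen2-large", "num_samples": 4}
```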
```json
{
  "type": "object",
  "title": "Input",
  "properties": {
    "p": {
      "type": "number",
      "title": "P",
      "default": 0.9,
      "x-order": 4,
      "description": "Probability of sampling from top-k"
    },
    "t": {
      "type": "number",
      "title": "T",
      "default": 0.8,
      "x-order": 5,
      "description": "Temperature for top-k sampling"
    },
    "fp16": {
      "type": "boolean",
      "title": "Fp16",
      "default": true,
      "x-order": 8,
      "description": "Use mixed precision"
    },
    "model": {
      "enum": [
        "progen2-small",
        "progen2-medium",
        "progen2-oas",
        "progen2-base",
        "progen2-large",
        "progen2-BFD90",
        "progen2-xlarge"
      ],
      "type": "string",
      "title": "model",
      "description": "Model id",
      "default": "progen2-small",
      "x-order": 0
    },
    "device": {
      "type": "string",
      "title": "Device",
      "default": "cuda:0",
      "x-order": 1,
      "description": "Device to run model on"
    },
    "sanity": {
      "type": "boolean",
      "title": "Sanity",
      "default": true,
      "x-order": 10,
      "description": "Run sanity check"
    },
    "context": {
      "type": "string",
      "title": "Context",
      "default": "1",
      "x-order": 9,
      "description": "Context to use for generation"
    },
    "rng_seed": {
      "type": "integer",
      "title": "Rng Seed",
      "default": 42,
      "x-order": 2,
      "description": "Random number generator seed"
    },
    "max_length": {
      "type": "integer",
      "title": "Max Length",
      "default": 1024,
      "x-order": 6,
      "description": "Maximum length of generated text"
    },
    "num_samples": {
      "type": "integer",
      "title": "Num Samples",
      "default": 2,
      "x-order": 7,
      "description": "Number of samples to generate"
    },
    "rng_deterministic": {
      "type": "boolean",
      "title": "Rng Deterministic",
      "default": true,
      "x-order": 3,
      "description": "Use deterministic RNG"
    }
  }
}
```
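Before sending a request, a client can sanity-check its input against the schema above. The sketch below validates field types and enum membership using only the standard library; the `check_input` helper is illustrative (not part of the model's API), and `properties` is an excerpt of the schema's `"properties"` object.

```python
def check_input(payload, properties):
    """Validate each field's type and enum membership against the schema's 'properties'."""
    type_map = {"string": str, "integer": int, "number": (int, float), "boolean": bool}
    for name, value in payload.items():
        spec = properties.get(name)
        if spec is None:
            raise KeyError(f"unknown field: {name}")
        # bool is a subclass of int in Python, so reject it explicitly for numeric fields
        if spec["type"] in ("integer", "number") and isinstance(value, bool):
            raise TypeError(f"{name}: expected {spec['type']}, got bool")
        if not isinstance(value, type_map[spec["type"]]):
            raise TypeError(f"{name}: expected {spec['type']}")
        if "enum" in spec and value not in spec["enum"]:
            raise ValueError(f"{name}: must be one of {spec['enum']}")

# Excerpt of the "properties" object from the full schema above.
properties = {
    "model": {"type": "string", "enum": [
        "progen2-small", "progen2-medium", "progen2-oas", "progen2-base",
        "progen2-large", "progen2-BFD90", "progen2-xlarge"]},
    "p": {"type": "number"},
    "rng_seed": {"type": "integer"},
    "fp16": {"type": "boolean"},
}

check_input({"model": "progen2-small", "p": 0.9, "rng_seed": 42}, properties)  # passes silently
```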
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
```json
{
  "type": "string",
  "title": "Output"
}
```