enhance-replicate/flix2.2.14t2v
Run enhance-replicate/flix2.2.14t2v with an API
Use one of our client libraries to get started quickly. Clicking a library takes you to the Playground tab, where you can tweak the inputs, see the results, and copy the corresponding code into your own project.
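As a minimal sketch with the official Python client (`pip install replicate`): the model identifier below is taken from the title of this page, and depending on how the model is published you may need to append a `:version` hash to it. The payload uses the documented defaults; only `prompt` is required. The call is guarded so the script is a no-op without an API token.

```python
import os

# Input values mirror the defaults in the input schema below; only `prompt`
# is required, so every other key here could be omitted.
input_payload = {
    "prompt": "a red fox running through a snowy forest at dusk",
    "width": 832,
    "height": 480,
    "num_frames": 25,
    "fps": 8,
    "guidance_scale": 4,
    "num_inference_steps": 10,
}

if os.environ.get("REPLICATE_API_TOKEN"):
    # The Replicate client reads REPLICATE_API_TOKEN from the environment.
    import replicate

    # Returns a URI string pointing at the generated video (see Output schema).
    output = replicate.run("enhance-replicate/flix2.2.14t2v", input=input_payload)
    print(output)
else:
    print("Set REPLICATE_API_TOKEN to run the prediction.")
```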
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
| Field | Type | Default value | Description |
|---|---|---|---|
| prompt | string | | Text prompt for video generation |
| negative_prompt | string | 色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走 | Negative prompt to avoid certain elements. The Chinese default translates roughly to: vivid tones, overexposed, static, blurry details, subtitles, style, artwork, painting, still frame, overall gray, worst quality, low quality, JPEG compression artifacts, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn face, deformed, disfigured, malformed limbs, fused fingers, motionless frame, cluttered background, three legs, crowded background, walking backwards. |
| height | integer | 480 (min: 256, max: 1024) | Height of the video |
| width | integer | 832 (min: 256, max: 1024) | Width of the video |
| num_frames | integer | 25 (min: 16, max: 200) | Number of frames to generate |
| guidance_scale | number | 4 (min: 1, max: 20) | Guidance scale for generation |
| guidance_scale_2 | number | 3 (min: 1, max: 20) | Second guidance scale |
| num_inference_steps | integer | 10 (min: 10, max: 100) | Number of inference steps |
| fps | integer | 8 (min: 8, max: 30) | Frames per second for output video |
| seed | integer | | Random seed for reproducibility |
| lora_weight | number | 1 (min: 0, max: 1) | LoRA weight strength (0.0 = no LoRA, 1.0 = full LoRA) |
```json
{
  "type": "object",
  "title": "Input",
  "required": [
    "prompt"
  ],
  "properties": {
    "fps": {
      "type": "integer",
      "title": "Fps",
      "default": 8,
      "maximum": 30,
      "minimum": 8,
      "x-order": 8,
      "description": "Frames per second for output video"
    },
    "seed": {
      "type": "integer",
      "title": "Seed",
      "x-order": 9,
      "description": "Random seed for reproducibility"
    },
    "width": {
      "type": "integer",
      "title": "Width",
      "default": 832,
      "maximum": 1024,
      "minimum": 256,
      "x-order": 3,
      "description": "Width of the video"
    },
    "height": {
      "type": "integer",
      "title": "Height",
      "default": 480,
      "maximum": 1024,
      "minimum": 256,
      "x-order": 2,
      "description": "Height of the video"
    },
    "prompt": {
      "type": "string",
      "title": "Prompt",
      "x-order": 0,
      "description": "Text prompt for video generation"
    },
    "num_frames": {
      "type": "integer",
      "title": "Num Frames",
      "default": 25,
      "maximum": 200,
      "minimum": 16,
      "x-order": 4,
      "description": "Number of frames to generate"
    },
    "lora_weight": {
      "type": "number",
      "title": "Lora Weight",
      "default": 1,
      "maximum": 1,
      "minimum": 0,
      "x-order": 10,
      "description": "LoRA weight strength (0.0 to 1.0, 0.0 = no LoRA, 1.0 = full LoRA)"
    },
    "guidance_scale": {
      "type": "number",
      "title": "Guidance Scale",
      "default": 4,
      "maximum": 20,
      "minimum": 1,
      "x-order": 5,
      "description": "Guidance scale for generation"
    },
    "negative_prompt": {
      "type": "string",
      "title": "Negative Prompt",
      "default": "\u8272\u8c03\u8273\u4e3d\uff0c\u8fc7\u66dd\uff0c\u9759\u6001\uff0c\u7ec6\u8282\u6a21\u7cca\u4e0d\u6e05\uff0c\u5b57\u5e55\uff0c\u98ce\u683c\uff0c\u4f5c\u54c1\uff0c\u753b\u4f5c\uff0c\u753b\u9762\uff0c\u9759\u6b62\uff0c\u6574\u4f53\u53d1\u7070\uff0c\u6700\u5dee\u8d28\u91cf\uff0c\u4f4e\u8d28\u91cf\uff0cJPEG\u538b\u7f29\u6b8b\u7559\uff0c\u4e11\u964b\u7684\uff0c\u6b8b\u7f3a\u7684\uff0c\u591a\u4f59\u7684\u624b\u6307\uff0c\u753b\u5f97\u4e0d\u597d\u7684\u624b\u90e8\uff0c\u753b\u5f97\u4e0d\u597d\u7684\u8138\u90e8\uff0c\u7578\u5f62\u7684\uff0c\u6bc1\u5bb9\u7684\uff0c\u5f62\u6001\u7578\u5f62\u7684\u80a2\u4f53\uff0c\u624b\u6307\u878d\u5408\uff0c\u9759\u6b62\u4e0d\u52a8\u7684\u753b\u9762\uff0c\u6742\u4e71\u7684\u80cc\u666f\uff0c\u4e09\u6761\u817f\uff0c\u80cc\u666f\u4eba\u5f88\u591a\uff0c\u5012\u7740\u8d70",
      "x-order": 1,
      "description": "Negative prompt to avoid certain elements"
    },
    "guidance_scale_2": {
      "type": "number",
      "title": "Guidance Scale 2",
      "default": 3,
      "maximum": 20,
      "minimum": 1,
      "x-order": 6,
      "description": "Second guidance scale"
    },
    "num_inference_steps": {
      "type": "integer",
      "title": "Num Inference Steps",
      "default": 10,
      "maximum": 100,
      "minimum": 10,
      "x-order": 7,
      "description": "Number of inference steps"
    }
  }
}
```
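Since out-of-range values are rejected server-side, it can be useful to check a payload against the schema's bounds before sending it. A minimal stdlib sketch (the `SPEC` table and `validate` helper are illustrative, not part of any client library; bounds are copied from the schema above):

```python
# (min, max, default) per bounded field, taken from the input schema above.
SPEC = {
    "height": (256, 1024, 480),
    "width": (256, 1024, 832),
    "num_frames": (16, 200, 25),
    "guidance_scale": (1, 20, 4),
    "guidance_scale_2": (1, 20, 3),
    "num_inference_steps": (10, 100, 10),
    "fps": (8, 30, 8),
    "lora_weight": (0, 1, 1),
}

def validate(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload looks valid."""
    errors = []
    if not payload.get("prompt"):
        errors.append("prompt is required")
    for field, (lo, hi, _default) in SPEC.items():
        if field in payload and not (lo <= payload[field] <= hi):
            errors.append(f"{field}={payload[field]} outside [{lo}, {hi}]")
    return errors

print(validate({"prompt": "a cat", "fps": 60}))  # ['fps=60 outside [8, 30]']
```

Fields absent from the payload are skipped, matching the documented behavior that missing fields fall back to their defaults.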
Output schema
The shape of the response you’ll get when you run this model with an API.
```json
{
  "type": "string",
  "title": "Output",
  "format": "uri"
}
```
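Since the output is a single URI string pointing at the generated video, a typical last step is downloading it. A small stdlib sketch, assuming the URI is a plain HTTPS file (the example URI shape is illustrative):

```python
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

def filename_for(uri: str) -> str:
    """Pick a local filename from the last path segment of the output URI."""
    return os.path.basename(urlparse(uri).path) or "output.mp4"

def save_output(uri: str, dest_dir: str = ".") -> str:
    """Download the generated video and return its local path."""
    dest = os.path.join(dest_dir, filename_for(uri))
    urlretrieve(uri, dest)  # output URIs are typically time-limited, so fetch promptly
    return dest

print(filename_for("https://example.com/outputs/abc123/output.mp4"))  # output.mp4
```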