enhance-replicate/flix_lipsync_test


Run enhance-replicate/flix_lipsync_test with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
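
For example, here is a minimal sketch using the official replicate Python client (pip install replicate, with REPLICATE_API_TOKEN set in your environment). The two media URLs are placeholders, and depending on how the model is published you may need to append a specific version id after a colon (owner/model:versionid):

import replicate

# Minimal call sketch: set only the two source-media fields from the
# input schema below; every other field falls back to its default.
output = replicate.run(
    "enhance-replicate/flix_lipsync_test",
    input={
        "face": "https://example.com/face.mp4",          # placeholder URL
        "audio_path": "https://example.com/speech.wav",  # placeholder URL
    },
)
print(output)  # a URI string pointing at the rendered video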

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

Field               Type     Default value  Description
checkpoint          None     Wav2Lip_GAN    None
enhancer            None     none           None
output_resolution   None     360p           Output video resolution
fps                 number   8              Override FPS (esp. for static image)
face                string                  Path to face video/image
pads                integer  4              Vertical mouth offset (-15 to 15)
seed                integer  42             Random seed for reproducibility
debug               boolean  False          Enable verbose logging and timers
static              boolean  False          Use only first frame
pingpong            boolean  False          Pingpong frames if audio longer
cache_dir           string   cache          Directory to store face cache files
face_mode           integer  0              Crop style affecting mouth region
hq_output           boolean  False          HQ output (PNG -> mp4)
audio_path          string                  Path to audio or video with speech
resize_factor       integer  1              Downscale input frames
use_face_cache      boolean  True           Cache face preprocessing for faster reuse of same input
wav2lip_batch_size  integer  128            Batch size for Wav2Lip
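
Putting the schema together, an input that makes a few of these fields explicit might look like the sketch below. The two media URLs are placeholders; the remaining values simply restate defaults from the table above:

input = {
    "face": "https://example.com/face.mp4",          # placeholder; source face video/image
    "audio_path": "https://example.com/speech.wav",  # placeholder; speech to lip-sync to
    "checkpoint": "Wav2Lip_GAN",   # default per the table above
    "output_resolution": "360p",   # default output resolution
    "pads": 4,                     # vertical mouth offset, -15 to 15
    "seed": 42,                    # fixed seed for reproducible runs
    "use_face_cache": True,        # reuse cached face preprocessing
}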

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "string",
  "title": "Output",
  "format": "uri"
}
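
Since the output is a single URI string, saving the result is one download. A minimal sketch using only the Python standard library (the URI shown is a placeholder; newer versions of the replicate client may instead hand you a file-like object you can read directly):

import urllib.request

output_uri = "https://example.com/output.mp4"  # placeholder for the returned URI
urllib.request.urlretrieve(output_uri, "lipsync_output.mp4")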