xconda/syncup

SyncUp: Audio-driven lip-sync using latent diffusion


Run xconda/syncup with an API

Use one of our client libraries to get started quickly. The Playground tab lets you tweak the model's inputs, see the results, and copy the corresponding code to use in your own project.
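As a sketch of what that looks like with the Replicate Python client (not official sample code): `build_input` and `run_syncup` are hypothetical helper names, the field names and defaults come from the input schema below, and actually running the model requires `pip install replicate` plus a `REPLICATE_API_TOKEN` in your environment.

```python
def build_input(video, audio, audio2=None, guidance_scale=1.5,
                inference_steps=20, seed=1247, enable_deepcache=True):
    """Assemble the input dict for xconda/syncup using the schema defaults."""
    payload = {
        "video": video,                      # input video file (URL)
        "audio": audio,                      # audio for person 1 (or single person)
        "guidance_scale": guidance_scale,    # 1.0-3.0, higher = stronger lip-sync
        "inference_steps": inference_steps,  # 10-50, higher = better quality
        "seed": seed,                        # 0 for random
        "enable_deepcache": enable_deepcache,
    }
    if audio2 is not None:                   # optional two-person mode
        payload["audio2"] = audio2
    return payload

def run_syncup(**kwargs):
    """Run the model; needs the `replicate` package and an API token."""
    import replicate  # imported here so build_input works without the package
    return replicate.run("xconda/syncup", input=build_input(**kwargs))

# Example (requires network access and an API token):
# url = run_syncup(video="https://example.com/talk.mp4",
#                  audio="https://example.com/voice.wav")
```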

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

| Field | Type | Default value | Description |
| --- | --- | --- | --- |
| video | string | | Input video file |
| audio | string | | Input audio for person 1 (or single person) |
| audio2 | string | | Input audio for person 2 (optional, for two-person mode) |
| guidance_scale | number | 1.5 | Audio conditioning strength (min 1, max 3; higher = better lip-sync) |
| inference_steps | integer | 20 | Denoising steps (min 10, max 50; higher = better quality) |
| seed | integer | 1247 | Random seed (0 for random) |
| enable_deepcache | boolean | True | Enable DeepCache optimization |
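Since the numeric fields have hard min/max bounds, a small client-side check can catch out-of-range values before a request is sent. This is a sketch: `check_bounds` is a hypothetical helper that simply mirrors the ranges in the table above.

```python
# Min/Max bounds from the input schema above (hypothetical client-side check,
# not part of the API itself).
BOUNDS = {
    "guidance_scale": (1, 3),
    "inference_steps": (10, 50),
}

def check_bounds(payload):
    """Raise ValueError if a numeric field falls outside its schema range."""
    for field, (lo, hi) in BOUNDS.items():
        if field in payload and not (lo <= payload[field] <= hi):
            raise ValueError(f"{field}={payload[field]} outside [{lo}, {hi}]")
    return payload
```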

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "string",
  "title": "Output",
  "format": "uri"
}
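The output is a single string containing a URI (typically a link to the rendered video). As a sketch, a hypothetical `is_output_uri` helper using only the standard library can sanity-check the response against the `{"type": "string", "format": "uri"}` schema before you try to download it.

```python
from urllib.parse import urlparse

def is_output_uri(output):
    """Check that the model output is a string shaped like a URI,
    matching the {"type": "string", "format": "uri"} output schema."""
    if not isinstance(output, str):
        return False
    parts = urlparse(output)
    # A usable download link needs both a scheme and a host.
    return bool(parts.scheme and parts.netloc)
```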