founderfeed/multi-model-lipsync

Run founderfeed/multi-model-lipsync with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
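
For example, with the official Replicate Python client (a minimal sketch; the model identifier may need an explicit `:version` suffix for community models, and the input URLs below are placeholders):

```python
# Minimal sketch using the Replicate Python client
# (pip install replicate; set REPLICATE_API_TOKEN in your environment).
import replicate

output = replicate.run(
    "founderfeed/multi-model-lipsync",  # may require an ":<version>" suffix
    input={
        "model": "sync-lipsync-2-pro",                    # the default model
        "video": "https://example.com/talking-head.mp4",  # placeholder URL
        "audio": "https://example.com/voiceover.mp3",     # placeholder URL
    },
)
print(output)  # the generated video's URI (see the output schema below)
```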

Input schema

The fields you can use to run this model with an API. If you don't provide a value for a field, its default value will be used.

| Field | Type | Default value | Description |
| ----- | ---- | ------------- | ----------- |
| model | None | sync-lipsync-2-pro | Select the lipsync model to use |
| audio | string | | Input audio file (MP3, WAV, etc.) |
| image | string | | Input image (required for image2video models) |
| video | string | | Input video (required for video2video models) |
| duration | integer | | Video duration in seconds (4-10; WAN 2.5 only) |
| voice_id | string | en_AOT | Voice ID for TTS (Kling only; used with text) |
| audio_text | string | | Spoken content for lipsync (Kling if no audio; required for Veo; WAN 2.5 if no audio) |
| resolution | string | | Video resolution (e.g., 480p, 720p, 1080p for WAN 2.5 and Veo models) |
| voice_speed | number | 1 | Voice speed for TTS (Kling only; used with text). Min: 0.8, max: 2 |
| aspect_ratio | string | | Video aspect ratio (e.g., 16:9, 9:16; Veo models only) |
| replicate_api_key | string | | Your Replicate API key (starts with r8_). Security: this key is automatically secured and never logged or exposed in outputs |
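
To show how the conditional fields combine, here is a sketch of an image2video request driven by TTS instead of an uploaded audio file (the Kling path described above). The model name and image URL are placeholders, since the schema does not enumerate the allowed model values:

```python
# Hypothetical input for the Kling TTS path: with no "audio" supplied,
# "audio_text" is spoken using the selected voice (per the table above).
kling_tts_input = {
    "model": "kling-lipsync",  # placeholder; the schema does not list model options
    "image": "https://example.com/portrait.png",  # required for image2video models
    "audio_text": "Welcome to the launch of our new product.",
    "voice_id": "en_AOT",   # the default TTS voice
    "voice_speed": 1.2,     # must fall within [0.8, 2]
}
```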

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
```json
{
  "type": "string",
  "title": "Output",
  "format": "uri"
}
```
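
Since the output is a single URI, saving the result takes one call; a minimal sketch using Python's standard library (the URI and local filename below are placeholders):

```python
# Download the generated video from the URI returned by the run.
import urllib.request

output_uri = "https://example.com/output.mp4"  # placeholder: substitute the URI the run returns
urllib.request.urlretrieve(output_uri, "lipsync-output.mp4")  # arbitrary local filename
```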