Lipsync AI models on Replicate enable you to synchronize lip movements in videos or images with new audio tracks, creating realistic talking faces. These tools are ideal for dubbing, animation, content localization, and creative projects.
Lipsync models generate realistic mouth movements that match new audio tracks.
You can use them to make a still image or existing video appear as if it’s speaking naturally — perfect for dubbing, localization, animation, or creative storytelling.
These models analyze the phonemes and rhythm of the audio, then map those to the facial landmarks or motion of the person in your input image or video.
The result is a synchronized, natural-looking talking face that matches the speech timing and emotion of the audio.
Lipsync models are used across a range of applications, from dubbing and content localization to animation and creative storytelling. Some of the most widely used models appear in the Recommended Models list below. If you're starting from a single image rather than a video, try an image-driven model such as cjwbw/sadtalker or wan-video/wan-2.2-s2v, which animate a reference photo to match an audio clip.
Many users combine lipsync models with translation or speech-generation models to create localized videos: for example, translating a script, synthesizing the translated speech with a text-to-speech model, and then lipsyncing the original footage to the new audio.
Which model is best depends on your needs: whether your input is a still image or a video, and how you want to balance speed against output quality.
You can also chain lipsync models with other tools, such as text-to-speech for generating the audio and video upscaling for polishing the output. A common end-to-end workflow is Prompt → TTS → Lipsync → Video Upscale for full video production.
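The chained workflow above can be sketched with Replicate's Python client (pip install replicate). The lipsync slug below comes from the list on this page, but the TTS model is a placeholder and every input field name is an assumption; check each model's schema on its Replicate page before running:

```python
def build_chain_inputs(script: str) -> dict:
    """Collect per-step inputs for a Prompt -> TTS -> Lipsync chain."""
    return {
        "tts": {"text": script},  # assumed field name for the TTS step
        "lipsync": {},            # audio is filled in after the TTS step runs
    }


def run_chain(script: str, video_url: str):
    import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set

    steps = build_chain_inputs(script)

    # Placeholder: substitute any text-to-speech model hosted on Replicate.
    audio = replicate.run("your-tts-model", input=steps["tts"])

    # sync/lipsync-2 is listed on this page; "video" and "audio" are
    # assumed input names -- confirm them on the model's API tab.
    steps["lipsync"].update({"video": video_url, "audio": audio})
    return replicate.run("sync/lipsync-2", input=steps["lipsync"])
```

A video-upscale step would follow the same pattern: pass the lipsync output URL as the input to an upscaling model.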
Most official lipsync models on Replicate are licensed for commercial use, but always check the individual model's page to confirm usage rights, especially for outputs used in advertising, film, or paid content.
For multi-person scenes, the zsxkib/multitalk model supports conversational lipsync: upload multiple audio clips and a reference image to generate a realistic back-and-forth conversation between characters.
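The multi-speaker inputs just described might look like this with the Python client. This is only a hedged sketch: the field names ("image", "audio_1", "audio_2", …) are guesses, so confirm the actual schema on the zsxkib/multitalk model page:

```python
def build_conversation_input(image_url: str, audio_urls: list[str]) -> dict:
    """Pair one reference image with per-speaker audio clips."""
    inputs = {"image": image_url}  # assumed field name
    for i, url in enumerate(audio_urls, start=1):
        inputs[f"audio_{i}"] = url  # assumed: one audio field per speaker
    return inputs


def run_conversation(image_url: str, audio_urls: list[str]):
    import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set

    return replicate.run(
        "zsxkib/multitalk",
        input=build_conversation_input(image_url, audio_urls),
    )
```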
Generation speed depends on model complexity, but most official models are optimized for near real-time performance on Replicate's infrastructure.
To get started, pick a model from the list below, upload an image or video along with an audio file (or text), and generate your first lipsynced video.
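As a minimal sketch of that first run with the Replicate Python client: kwaivgi/kling-lip-sync is taken from the list below, but the input field names and URLs here are illustrative assumptions, so confirm them on the model's API tab:

```python
import os


def make_input(video_url: str, audio_url: str) -> dict:
    # "video_url" and "audio_url" are illustrative field names.
    return {"video_url": video_url, "audio_url": audio_url}


def main() -> None:
    import replicate  # pip install replicate

    output = replicate.run(
        "kwaivgi/kling-lip-sync",
        input=make_input("https://example.com/face.mp4",
                         "https://example.com/speech.wav"),
    )
    print(output)


# Only attempt a real prediction when an API token is configured.
if __name__ == "__main__" and os.environ.get("REPLICATE_API_TOKEN"):
    main()
```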
Recommended Models

bytedance/omni-human
Turns your audio/video/images into professional-quality animated videos
Updated 2 hours ago
142.9K runs

kwaivgi/kling-lip-sync
Add lip-sync to any video with an audio file or text
Updated 1 month ago
17.9K runs

pixverse/lipsync
Generate realistic lipsync animations from audio for high-quality synchronization
Updated 1 month, 1 week ago
3.6K runs

sync/lipsync-2-pro
Studio-grade lipsync in minutes, not weeks
Updated 1 month, 1 week ago
3.4K runs

sync/lipsync-2
Generate realistic lipsyncs with Sync Labs' 2.0 model
Updated 1 month, 1 week ago
8.3K runs

wan-video/wan-2.2-s2v
Generate a video from an audio clip and a reference image
Updated 1 month, 2 weeks ago
4.3K runs

tmappdev/lipsync
Lipsync model using MuseTalk
Updated 2 months, 3 weeks ago
6.9K runs

zsxkib/multitalk
Audio-driven multi-person conversational video generation - Upload audio files and a reference image to create realistic conversations between multiple people
Updated 3 months, 4 weeks ago
2.4K runs

bytedance/latentsync
LatentSync: generate high-quality lip sync animations
Updated 7 months, 1 week ago
75.5K runs

cjwbw/sadtalker
Stylized Audio-Driven Single Image Talking Face Animation
Updated 1 year, 4 months ago
146.2K runs

cjwbw/aniportrait-audio2vid
Audio-Driven Synthesis of Photorealistic Portrait Animations
Updated 1 year, 6 months ago
14.7K runs

chenxwh/video-retalking
Audio-based Lip Synchronization for Talking Head Video
Updated 1 year, 9 months ago
31.2K runs

gauravk95/sadtalker-video
Make your video say anything
Updated 1 year, 9 months ago
1.4K runs