sync/react-1

Realistic lipsync with refined human emotion capabilities



react-1 from Sync Labs learns how a character performs from video, and gives you the ability to generate new emotional reads and timing variations from the same take.

This gives editors a third option when a performance does not land: instead of reshooting or regenerating the scene, adjust the acting itself.

react-1 works with both filmed footage and AI-generated footage from tools such as Runway, Veo, Sora, Pika, and Kling. Capture or generate the shot you want, then iterate on the delivery afterward.

What react-1 enables

• Direct new performances by uploading audio and guiding emotion
• Preserve identity and acting style while modifying the read
• Explore variations and intensities not captured on set
• Produce dubbing that feels native, with full facial behavior reanimation
