cjwbw / sadtalker

Stylized Audio-Driven Single Image Talking Face Animation

  • Public
  • 134.8K runs
  • GitHub
  • Paper
  • License

Run time and cost

This model costs approximately $0.19 to run on Replicate, or 5 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 138 seconds. The predict time for this model varies significantly based on the inputs.
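The local Docker route mentioned above can be sketched as follows. This is a hedged example, not official instructions: `<version>` stands for the version hash shown on the Replicate model page, and the input field names (`source_image`, `driven_audio`) are assumptions — check the model's API tab for the actual schema.

```shell
# Pull and serve the model locally (requires an NVIDIA GPU and the
# NVIDIA container toolkit). Replace <version> with the version hash
# listed on the Replicate model page.
docker run -d -p 5000:5000 --gpus=all r8.im/cjwbw/sadtalker@sha256:<version>

# Send a prediction request to the local server. "source_image" and
# "driven_audio" are assumed input names -- consult the model's schema.
curl http://localhost:5000/predictions -X POST \
  -H "Content-Type: application/json" \
  -d '{"input": {"source_image": "https://example.com/face.png", "driven_audio": "https://example.com/speech.wav"}}'
```

The served container exposes the standard Cog HTTP interface, so the same request shape works for any Replicate-packaged model.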

Readme

original repo: https://github.com/OpenTalker/SadTalker


CVPR 2023

sadtalker

TL;DR: single portrait image 🙎‍♂️ + audio 🎤 = talking head video 🎞️.


🛎 Citation

If you find our work useful in your research, please consider citing:

@article{zhang2022sadtalker,
  title={SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation},
  author={Zhang, Wenxuan and Cun, Xiaodong and Wang, Xuan and Zhang, Yong and Shen, Xi and Guo, Yu and Shan, Ying and Wang, Fei},
  journal={arXiv preprint arXiv:2211.12194},
  year={2022}
}

💗 Acknowledgements

The facerender code borrows heavily from zhanglonghao's reproduction of face-vid2vid and from PIRender; we thank the authors for sharing their wonderful code. During training, we also use models from Deep3DFaceReconstruction and Wav2Lip, and we thank their authors for their excellent work.


📢 Disclaimer

This is not an official product of Tencent. This repository can only be used for personal/research/non-commercial purposes.

LOGO: color and font suggested by ChatGPT; logo font: Montserrat Alternates.

All copyright of the demo images and audio belongs to community users or derives from generations by Stable Diffusion. Feel free to contact us if any of this material makes you uncomfortable.