lucataco/stable-avatar

An end-to-end video diffusion transformer that synthesizes infinite-length, high-quality, audio-driven avatar videos without any post-processing.

Public
190 runs

Run time and cost

This model runs on Nvidia H100 GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

StableAvatar

StableAvatar: Infinite-Length Audio-Driven Avatar Video Generation
Shuyuan Tu¹, Yueming Pan³, Yinming Huang¹, Xintong Han⁴, Zhen Xing¹, Qi Dai², Chong Luo², Zuxuan Wu¹, Yu-Gang Jiang¹
¹Fudan University; ²Microsoft Research Asia; ³Xi’an Jiaotong University; ⁴Tencent Inc.

Audio-driven avatar videos generated by StableAvatar, demonstrating its ability to synthesize infinite-length, ID-preserving videos. All videos are synthesized directly by StableAvatar without any face-related post-processing tools, such as the face-swapping tool FaceFusion or face restoration models like GFP-GAN and CodeFormer.


Comparison results between StableAvatar and state-of-the-art (SOTA) audio-driven avatar video generation models highlight the superior performance of StableAvatar in delivering infinite-length, high-fidelity, identity-preserving avatar animation.

Overview

Model architecture: the overview of the StableAvatar framework.

Current diffusion models for audio-driven avatar video generation struggle to synthesize long videos with natural audio synchronization and identity consistency. This paper presents StableAvatar, the first end-to-end video diffusion transformer that synthesizes infinite-length, high-quality videos without post-processing. Conditioned on a reference image and audio, StableAvatar integrates tailored training and inference modules to enable infinite-length video generation. We observe that the main reason existing models cannot generate long videos lies in their audio modeling: they typically rely on third-party off-the-shelf extractors to obtain audio embeddings, which are then injected directly into the diffusion model via cross-attention. Because current diffusion backbones lack audio-related priors, this approach causes severe latent-distribution error accumulation across video clips, so the latent distribution of later segments gradually drifts away from the optimal distribution. To address this, StableAvatar introduces a novel Time-step-aware Audio Adapter that prevents error accumulation via time-step-aware modulation. During inference, we propose a novel Audio Native Guidance mechanism that further enhances audio synchronization by leveraging the diffusion model's own evolving joint audio-latent prediction as a dynamic guidance signal. To improve the smoothness of infinite-length videos, we introduce a Dynamic Weighted Sliding-window Strategy that fuses latents over time. Experiments on benchmarks demonstrate the effectiveness of StableAvatar both qualitatively and quantitatively.

News

  • [2025-8-11]: 🔥 The project page, code, technical report, and a basic model checkpoint have been released. The LoRA training code, the evaluation dataset, and StableAvatar-pro will be released very soon. Stay tuned!

🛠️ To-Do List

  • [x] StableAvatar-1.3B-basic
  • [x] Inference Code
  • [x] Data Pre-Processing Code (Audio Extraction)
  • [x] Data Pre-Processing Code (Vocal Separation)
  • [x] Training Code
  • [ ] LoRA Training Code (Before 2025.8.17)
  • [ ] LoRA Finetuning Code (Before 2025.8.17)
  • [ ] Full Finetuning Code (Before 2025.8.17)
  • [ ] Inference Code with Audio Native Guidance
  • [ ] StableAvatar-pro

🔑 Quickstart

The basic version of the model checkpoint (Wan2.1-1.3B-based) supports generating infinite-length videos at 480x832, 832x480, or 512x512 resolution. If you run into out-of-memory issues, you can reduce the number of animated frames or the output resolution.

🧱 Environment setup

pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
# Optionally install flash_attn to accelerate attention computation
pip install flash_attn
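
After installation, a quick sanity check (not part of the official setup, just a suggested verification) confirms that PyTorch sees your GPU and that the optional flash_attn imports cleanly:

# Verify the PyTorch install and CUDA visibility
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# If you installed the optional flash_attn, confirm it imports without errors
python -c "import flash_attn" && echo "flash_attn OK" || echo "flash_attn not installed"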

🧱 Download weights

If you encounter connection issues with Hugging Face, you can use the mirror endpoint by setting the environment variable export HF_ENDPOINT=https://hf-mirror.com. Please download the weights manually as follows:

pip install "huggingface_hub[cli]"
cd StableAvatar
mkdir checkpoints
huggingface-cli download FrancisRing/StableAvatar --local-dir ./checkpoints
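
If the direct download fails, the same command can be run through the mirror endpoint mentioned above; this is simply the documented download with the environment variable prepended:

export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download FrancisRing/StableAvatar --local-dir ./checkpoints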

All the weights should be organized under checkpoints as shown below; the overall file structure of the project should look as follows:

StableAvatar/
├── accelerate_config
├── deepspeed_config
├── examples
├── wan
├── checkpoints
│   ├── Kim_Vocal_2.onnx
│   ├── wav2vec2-base-960h
│   ├── Wan2.1-Fun-V1.1-1.3B-InP
│   └── StableAvatar-1.3B
├── inference.py
├── inference.sh
├── train_1B_square.py
├── train_1B_square.sh
├── train_1B_vec_rec.py
├── train_1B_vec_rec.sh
├── audio_extractor.py
├── vocal_seperator.py
└── requirements.txt

🧱 Audio Extraction

Given the target video file (.mp4), you can use the following command to obtain the corresponding audio file (.wav):

python audio_extractor.py --video_path="path/test/video.mp4" --saved_audio_path="path/test/audio.wav"
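
If you have a folder of videos, a simple shell loop (a sketch, using the documented audio_extractor.py flags; the directory is a placeholder) can extract the audio for each one:

# Extract a .wav next to every .mp4 in the folder (paths are placeholders)
for video in path/test/*.mp4; do
  python audio_extractor.py --video_path="$video" --saved_audio_path="${video%.mp4}.wav"
done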

🧱 Vocal Separation

As noisy background music may negatively impact the performance of StableAvatar to some extent, you can further separate the vocals from the audio file for better lip synchronization. Given the path to an audio file (.wav), run the following command to extract the corresponding vocal signals:

pip install audio-separator
python vocal_seperator.py --audio_separator_model_file="path/StableAvatar/checkpoints/Kim_Vocal_2.onnx" --audio_file_path="path/test/audio.wav" --saved_vocal_path="path/test/vocal.wav"
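
Putting the two preprocessing steps together, a minimal end-to-end sketch (paths are placeholders; it simply chains the two documented commands) looks like this:

# Video -> audio -> separated vocals (placeholder paths)
VIDEO=path/test/video.mp4
AUDIO=path/test/audio.wav
VOCAL=path/test/vocal.wav
python audio_extractor.py --video_path="$VIDEO" --saved_audio_path="$AUDIO"
python vocal_seperator.py --audio_separator_model_file="path/StableAvatar/checkpoints/Kim_Vocal_2.onnx" --audio_file_path="$AUDIO" --saved_vocal_path="$VOCAL"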

🧱 Base Model inference

A sample configuration for testing is provided in inference.sh; you can easily modify its settings to suit your needs.

bash inference.sh

Wan2.1-1.3B-based StableAvatar supports audio-driven avatar video generation at three resolution settings: 512x512, 480x832, and 832x480. The main options in inference.sh are:

  • --width and --height set the resolution of the animation.
  • --output_dir is the directory where the generated animation is saved.
  • --validation_reference_path, --validation_driven_audio_path, and --validation_prompts are the path of the reference image, the path of the driving audio, and the text prompt, respectively. Prompts matter: the recommended format is [description of the first frame]-[description of human behavior]-[description of the background (optional)].
  • --pretrained_model_name_or_path, --pretrained_wav2vec_path, and --transformer_path are the paths of the pretrained Wan2.1-1.3B weights, the pretrained Wav2Vec2.0 weights, and the pretrained StableAvatar weights, respectively.
  • --sample_steps, --overlap_window_length, and --clip_sample_n_frames are the total number of inference steps, the overlap length between two context windows, and the number of frames synthesized per batch/context window, respectively. The recommended --sample_steps range is 30-50; more steps give higher quality. The recommended --overlap_window_length range is 5-15; a longer overlap gives higher quality but slower inference.
  • --sample_text_guide_scale and --sample_audio_guide_scale are the classifier-free guidance (CFG) scales for the text prompt and the audio. The recommended range for both is 3-6; increase the audio CFG to strengthen lip synchronization with the audio.
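
For reference, a run configured with these flags might look like the sketch below. The flag names come from the list above; the paths, prompt, and numeric values are placeholders (including the exact checkpoint filename), so treat the provided inference.sh as the authoritative version.

# Hypothetical invocation; all paths and values are placeholders
python inference.py \
  --pretrained_model_name_or_path="./checkpoints/Wan2.1-Fun-V1.1-1.3B-InP" \
  --pretrained_wav2vec_path="./checkpoints/wav2vec2-base-960h" \
  --transformer_path="./checkpoints/StableAvatar-1.3B/transformer3d-square.pt" \
  --validation_reference_path="./examples/reference.png" \
  --validation_driven_audio_path="./examples/vocal.wav" \
  --validation_prompts="A woman facing the camera, speaking naturally, indoor background" \
  --output_dir="./outputs" \
  --width=512 --height=512 \
  --sample_steps=50 \
  --overlap_window_length=10 \
  --clip_sample_n_frames=81 \
  --sample_text_guide_scale=4.5 \
  --sample_audio_guide_scale=4.5 \
  --GPU_memory_mode="model_full_load"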

We provide six cases at different resolution settings in path/StableAvatar/examples for validation. ❤️❤️Please feel free to try them out and enjoy the endless entertainment of infinite-length avatar video generation❤️❤️!

💡Tips

  • Wan2.1-1.3B-based StableAvatar weights come in two versions, transformer3d-square.pt and transformer3d-rec-vec.pt, which were trained on two video datasets with different resolution settings. Both versions support audio-driven avatar video generation at three resolution settings: 512x512, 480x832, and 832x480. You can modify --transformer_path in inference.sh to switch between the two versions.

  • If you have limited GPU resources, you can change the loading mode of StableAvatar by modifying --GPU_memory_mode in inference.sh. The options for --GPU_memory_mode are model_full_load, sequential_cpu_offload, model_cpu_offload_and_qfloat8, and model_cpu_offload. In particular, setting --GPU_memory_mode to sequential_cpu_offload brings total GPU memory consumption down to approximately 3 GB at the cost of slower inference. Setting --GPU_memory_mode to model_cpu_offload cuts GPU memory usage roughly in half compared to model_full_load.

  • If you have multiple GPUs, you can speed up inference by modifying --ulysses_degree and --ring_degree in inference.sh. For example, with 8 GPUs you can set --ulysses_degree=4 and --ring_degree=2. Note that ulysses_degree × ring_degree must equal the total number of GPUs (the world size). You can also add --fsdp_dit in inference.sh to activate FSDP in the DiT and further reduce GPU memory consumption. The flag combinations are sketched after this list.
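
As a quick reference, the flag combinations from the two tips above can be sketched as below; only the relevant flags are shown (everything else stays as in the provided inference.sh), and the values are illustrative:

# Pick one set of extra arguments and append it to the command in inference.sh (sketch only)
EXTRA_ARGS='--GPU_memory_mode=sequential_cpu_offload'            # low memory: ~3 GB VRAM, slower
# EXTRA_ARGS='--GPU_memory_mode=model_cpu_offload'               # roughly half the memory of model_full_load
# EXTRA_ARGS='--ulysses_degree=4 --ring_degree=2 --fsdp_dit'     # 8 GPUs: 4*2 must equal the world size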

Videos synthesized by StableAvatar do not contain audio. If you want a high-quality MP4 file with audio, we recommend merging the audio into the output file with ffmpeg as follows:

ffmpeg -i video_without_audio.mp4 -i /path/audio.wav -c:v copy -c:a aac -shortest /path/output_with_audio.mp4

🧱 VRAM requirement and Runtime

For a 5-second video (480x832, fps=25), the basic model (--GPU_memory_mode="model_full_load") requires approximately 18 GB of VRAM and finishes in about 3 minutes on a 4090 GPU.

🔥🔥Theoretically, StableAvatar is capable of synthesizing hours of video without significant quality degradation; however, the 3D VAE decoder demands significant GPU memory, especially when decoding 10k+ frames. You have the option to run the VAE decoder on CPU.🔥🔥

Contact

If you have any suggestions or find our work helpful, feel free to contact me:

Email: francisshuyuan@gmail.com

If you find our work useful, please consider giving a star ⭐ to this GitHub repository (StableAvatar) and citing it ❤️:

@article{tu2025stableavatar,
  title={StableAvatar: Infinite-Length Audio-Driven Avatar Video Generation},
  author={Tu, Shuyuan and Pan, Yueming and Huang, Yinming and Han, Xintong and Xing, Zhen and Dai, Qi and Luo, Chong and Wu, Zuxuan and Jiang, Yu-Gang},
  journal={arXiv preprint arXiv:2508.08248},
  year={2025}
}