zsxkib / moore-animateanyone

Unofficial Re-Trained AnimateAnyone (Image + DWPose Video → Animated Video of Image)

Run time and cost

This model costs approximately $0.18 per run on Replicate (roughly 5 runs per $1), though the exact cost depends on your inputs. It is also open source, so you can run it on your own machine with Docker.

This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 5 minutes, but predict time varies significantly with the inputs.

Readme

📜 Model Description

Reference Image of a Person + DWPose Motion Video → Animated Video of That Person Performing the DWPose Motion

Moore-AnimateAnyone is a machine learning model 🤖 that reproduces the AnimateAnyone pipeline. Given a still picture 🖼️ of a person and a DWPose estimation video, it generates a video of that person performing the motion from the pose video. It combines several training and inference techniques to approach the quality shown in the original AnimateAnyone study. The model isn’t finished yet; it’s a baseline version that comes close to, but doesn’t fully match, what AnimateAnyone can do 🎯.
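
If you want to call the model programmatically, here is a minimal sketch using the Replicate Python client. The input names (`image`, `pose_video`) and file names are assumptions for illustration only; check the model’s API schema on Replicate for the exact parameters.

```python
# Minimal sketch: run the model via the Replicate Python client.
# The input keys "image" and "pose_video" are assumptions -- consult the
# model's API tab on Replicate for the real schema, and pin a specific
# version ("zsxkib/moore-animateanyone:<version>") if your client requires it.
import replicate

output = replicate.run(
    "zsxkib/moore-animateanyone",
    input={
        "image": open("person.png", "rb"),             # still reference image (assumed key)
        "pose_video": open("dwpose_motion.mp4", "rb"),  # DWPose estimation video (assumed key)
    },
)
print(output)  # typically a URL to the generated video
```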

🎯 Intended Use

This model is intended for academic research 🔬 and development, and it’s released under the Apache License 2.0. You can use it to bring still pictures to life, which is great for animation 🎥, game development 🎮, and other creative projects. You can also train the model on your own images to create personalized animations. Just make sure you follow the Apache License terms when you use it.

🤔 Ethical Considerations

This project is made for academic study, and its creators aren’t responsible for how others use it 🚫. Anything you make with this tool is your own responsibility; the maintainers have no legal ties to, or liability for, what you do with it. Please use the tool responsibly and follow the applicable rules ⚖️.

⚠️ Caveats and Recommendations

This model is powerful and can do a lot, but there are some things to keep in mind:

  1. The background can come out noisy or full of artifacts 🎨, especially when the reference image has a clean, plain background.
  2. Results may look off when the reference image and the pose keypoints don’t match in scale 🔍; we haven’t yet implemented the preprocessing techniques described in the original paper. A rough workaround is sketched after this list.
  3. You may notice flickering or jitter 📉 when the motion is subtle or the scene is mostly static.
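
A hypothetical way to soften the scale-mismatch issue in point 2 is to resize your reference image to the pose video’s frame size before running the model. This is not something the model does for you; the snippet below is only a sketch with placeholder file names, and a plain resize will distort the image if the aspect ratios differ.

```python
# Hypothetical preprocessing (not part of the model): resize the reference image
# to the DWPose video's frame size so the person and keypoints are on a similar scale.
# File names are placeholders; prefer cropping/padding over a plain resize if the
# aspect ratios of your image and video differ noticeably.
import cv2

# Read the frame size of the DWPose motion video
cap = cv2.VideoCapture("dwpose_motion.mp4")
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()

# Resize the still reference image to match, then save it for the model
image = cv2.imread("person.png")
resized = cv2.resize(image, (width, height), interpolation=cv2.INTER_LANCZOS4)
cv2.imwrite("person_resized.png", resized)
```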

We’re working on these issues and plan to improve the model soon 🔜. If you’ve got suggestions or ideas 💡, we’d love to hear them.