paullux/framepack-runner

FramePack video generation with image + motion prompt. Based on Stanford's 2025 model.


🎞️ FramePack Runner – Image-to-Video with Prompt


Generate short videos from a single image and a motion prompt using the FramePack architecture (Zhang & Agrawala, Stanford, 2025).

👉 Try it now on Replicate


✨ Example usage

replicate run paullux/framepack-runner \
  -v image=@input.png \
  -v prompt="A cat jumps backward in surprise" \
  -v seed=123 \
  -v steps=30 \
  -v duration_seconds=5 \
  -v fps=30

🧠 About FramePack

Packing Input Frame Contexts in Next-Frame Prediction Models for Video Generation
Lvmin Zhang, Maneesh Agrawala – Stanford University, 2025

FramePack compresses the temporal context of previous frames into a fixed-length representation, so memory and compute stay roughly constant as the clip grows, which makes it well suited to generating long video sequences.
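
As a toy illustration of why that stays bounded (this is not the actual FramePack code, and the per-frame budget of 1536 tokens is made up): if each older frame receives half the token budget of the next newer one, the total context length converges instead of growing with the number of past frames.

def packed_context_length(num_past_frames, tokens_per_frame=1536):
    # Each older frame gets half the tokens of the next newer one,
    # so the total is a geometric series bounded by ~2 * tokens_per_frame.
    return sum(tokens_per_frame // (2 ** age) for age in range(num_past_frames))

for n in (1, 4, 16, 64):
    print(n, packed_context_length(n))  # 1536, 2880, 3070, 3070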

  • 📄 Project page

  • 🧪 Uses the FramePackPipeline from 🤗 diffusers


📦 Inputs

Name               Type     Description
image              file     The input image (.png or .jpg)
prompt             string   A motion-focused description prompt
seed               integer  Random seed for reproducibility (default: 42)
steps              integer  Sampling steps (default: 25)
duration_seconds   number   Duration of the video in seconds (default: 5)
fps                integer  Output video frame rate (default: 30)

📤 Output

Returns an .mp4 video file composed of all generated frames.
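
If output is the value returned by replicate.run in the example above, the video can be saved locally as sketched below (older clients return a plain URL string; newer ones return a file-like object whose string form is the download URL).

import requests

with open("framepack_output.mp4", "wb") as f:
    f.write(requests.get(str(output), timeout=300).content)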


🛠 How it works

python framepack_runner.py \
  -v image=@input.png \
  -v prompt="A cat jumps backward in surprise" \
  -v seed=123 \
  -v steps=30 \
  -v duration_seconds=5 \
  -v fps=30

All frames are saved locally and then compiled into a video using ffmpeg at the requested frame rate (with the defaults, 5 seconds at 30 fps yields 150 frames).
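
A minimal sketch of that compilation step (the frame directory and filename pattern here are made up; the actual script may name its frames differently):

import subprocess

# Hypothetical frame layout: frames/frame_00000.png, frame_00001.png, ...
subprocess.run(
    [
        "ffmpeg", "-y",
        "-framerate", "30",
        "-i", "frames/frame_%05d.png",
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",
        "output.mp4",
    ],
    check=True,
)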

📸 Example Prompt Ideas

  • The robot jumps forward and transforms mid-air.

  • A woman spins slowly in a dark room, lit by candlelight.

  • The camera zooms toward a cat staring at a moving shadow.


📃 License

This implementation uses FramePack under its original academic license. Use of Replicate's infrastructure is governed by Replicate's terms of service.


🙌 Credits

Based on FramePack by Lvmin Zhang & Maneesh Agrawala (Stanford).
