lucataco / videollama3-7b

VideoLLaMA 3: Frontier Multimodal Foundation Models for Video Understanding

  • Public
  • 226 runs
  • GitHub
  • Weights
  • Paper
  • License

Run time and cost

This model costs approximately $0.0014 to run on Replicate, or about 714 runs per $1, but this varies depending on your inputs. It is also open source, and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 2 seconds.
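
For programmatic access, you can call the hosted model through the Replicate Python client. The snippet below is a minimal sketch: the input field names ("video", "prompt") are assumptions based on typical video-understanding deployments, so check the model's API schema on Replicate for the exact parameters, and pin a version hash if your client requires one.

```python
import replicate

# Minimal sketch: run the hosted model via the Replicate Python client.
# Assumption: the model accepts "video" and "prompt" inputs; verify the
# actual input schema on the model's API page before relying on these names.
output = replicate.run(
    "lucataco/videollama3-7b",
    input={
        "video": open("example.mp4", "rb"),
        "prompt": "Describe what happens in this video.",
    },
)
print(output)
```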

Readme

VideoLLaMA 3: Frontier Multimodal Foundation Models for Video Understanding

If you like our project, please give us a star ⭐ on GitHub for the latest updates.

🌟 Introduction

VideoLLaMA 3 is a state-of-the-art series of multimodal foundation models designed to excel at both image and video understanding. The models process and interpret visual content across a wide range of contexts and are built to address complex multimodal challenges, such as integrating textual and visual information, extracting insights from sequential video data, and performing high-level reasoning over both dynamic and static visual scenes.

🌎 Model Zoo

| Model | Base Model | HF Link |
|-------|------------|---------|
| VideoLLaMA3-7B (This Checkpoint) | Qwen2.5-7B | DAMO-NLP-SG/VideoLLaMA3-7B |
| VideoLLaMA3-2B | Qwen2.5-1.5B | DAMO-NLP-SG/VideoLLaMA3-2B |
| VideoLLaMA3-7B-Image | Qwen2.5-7B | DAMO-NLP-SG/VideoLLaMA3-7B-Image |
| VideoLLaMA3-2B-Image | Qwen2.5-1.5B | DAMO-NLP-SG/VideoLLaMA3-2B-Image |
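
If you prefer to run a checkpoint directly from Hugging Face rather than through Replicate, a sketch along the lines below should work. It assumes the checkpoints expose their custom model and processor classes via trust_remote_code=True and that the processor accepts a conversation-style input like the examples in the project's GitHub README; the exact field names (video_path, fps, max_frames) follow those examples and may change between releases.

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

# Sketch: load VideoLLaMA3-7B from Hugging Face with custom code enabled.
model_id = "DAMO-NLP-SG/VideoLLaMA3-7B"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Assumed conversation format (video path, sampling fps, frame cap); check
# the upstream README for the schema used by the release you install.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": [
            {"type": "video", "video": {"video_path": "example.mp4", "fps": 1, "max_frames": 128}},
            {"type": "text", "text": "What is happening in this video?"},
        ],
    },
]

inputs = processor(conversation=conversation, return_tensors="pt")
inputs = {k: (v.to(model.device) if isinstance(v, torch.Tensor) else v) for k, v in inputs.items()}
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip())
```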

We also release the tuned vision encoder of VideoLLaMA3-7B for wider use:

| Model | Base Model | HF Link |
|-------|------------|---------|
| VideoLLaMA3-7B Vision Encoder | siglip-so400m-patch14-384 | DAMO-NLP-SG/VL3-SigLIP-NaViT |
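
As a rough sketch of how the standalone encoder might be used for feature extraction (assuming the Hugging Face repo maps to auto-loadable classes through trust_remote_code; the output attribute name is also an assumption, so consult the repo's model card for the actual interface):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

# Sketch: embed a single video frame with the released vision encoder.
encoder_id = "DAMO-NLP-SG/VL3-SigLIP-NaViT"
processor = AutoImageProcessor.from_pretrained(encoder_id, trust_remote_code=True)
encoder = AutoModel.from_pretrained(encoder_id, trust_remote_code=True)

image = Image.open("frame.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt")["pixel_values"]
with torch.no_grad():
    outputs = encoder(pixel_values=pixel_values)
# Assumption: patch-level features are exposed as last_hidden_state.
features = outputs.last_hidden_state
print(features.shape)
```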

🚀 Main Results

(Benchmark results are presented as a figure in the original README; * denotes reproduced results.)

Citation

If you find VideoLLaMA useful for your research and applications, please cite using this BibTeX:

@article{damonlpsg2025videollama3,
  title={VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding},
  author={Zhang, Boqiang and Li, Kehan and Cheng, Zesen and Hu, Zhiqiang and Yuan, Yuqian and Chen, Guanzheng and Leng, Sicong and Jiang, Yuming and Zhang, Hang and Li, Xin and Jin, Peng and Zhang, Wenqi and Wang, Fan and Bing, Lidong and Zhao, Deli},
  journal={arXiv preprint arXiv:2501.13106},
  year={2025},
  url = {https://arxiv.org/abs/2501.13106}
}

@article{damonlpsg2024videollama2,
  title={VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs},
  author={Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong},
  journal={arXiv preprint arXiv:2406.07476},
  year={2024},
  url = {https://arxiv.org/abs/2406.07476}
}

@article{damonlpsg2023videollama,
  title = {Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding},
  author = {Zhang, Hang and Li, Xin and Bing, Lidong},
  journal = {arXiv preprint arXiv:2306.02858},
  year = {2023},
  url = {https://arxiv.org/abs/2306.02858}
}