---
library_name: transformers
tags:
  - visual-encoder
  - multi-modal-large-language-model
license: apache-2.0
language:
  - en
base_model:
  - google/siglip-so400m-patch14-384
pipeline_tag: image-feature-extraction
---

# VideoLLaMA 3: Frontier Multimodal Foundation Models for Video Understanding

If you like our project, please give us a star ⭐ on GitHub for the latest updates.

## 🌟 Introduction

This model serves as the visual encoder in VideoLLaMA3.

VideoLLaMA3 leverages the Any-resolution Vision Tokenization (AVT) approach to dynamically process images and videos of varying resolutions. This is accomplished by adapting the pre-trained vision encoder (based on the ViT architecture) to use 2D-RoPE (2D Rotary Position Embeddings) in place of the absolute position embeddings traditionally used in ViT.

With AVT, VideoLLaMA3 can represent images and videos in greater detail across different resolutions, packing more information into the vision tokens. To integrate AVT seamlessly, we fine-tune both the vision encoder and the projector during the Vision Encoder Adaptation stage (Stage #1 of the VideoLLaMA3 training pipeline) using scene images, document data, and scene images containing text.
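
To make this concrete, here is a minimal, generic sketch in plain PyTorch (not the actual VideoLLaMA3 / SigLIP-NaViT code) of the two ingredients described above: splitting an image at its native resolution into a variable-length sequence of 14×14 patch tokens, and applying 2D rotary position embeddings keyed to each patch's (row, column) coordinate instead of absolute position embeddings. All function names and shapes here are illustrative.

```python
# Minimal, generic sketch of any-resolution patchification + 2D-RoPE.
# Illustrative only; not the actual VideoLLaMA3 / SigLIP-NaViT implementation.
import torch

def patchify(image: torch.Tensor, patch: int = 14):
    """Split a (C, H, W) image into a variable-length sequence of patch tokens.

    Returns the flattened patches and each patch's (row, col) grid coordinate,
    so the token count scales with the input resolution instead of being fixed.
    """
    c, h, w = image.shape
    gh, gw = h // patch, w // patch
    image = image[:, : gh * patch, : gw * patch]                      # drop remainder pixels
    patches = image.unfold(1, patch, patch).unfold(2, patch, patch)   # (C, gh, gw, p, p)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(gh * gw, c * patch * patch)
    rows, cols = torch.meshgrid(torch.arange(gh), torch.arange(gw), indexing="ij")
    coords = torch.stack([rows.flatten(), cols.flatten()], dim=-1)    # (N, 2)
    return patches, coords

def rope_2d(q: torch.Tensor, coords: torch.Tensor, base: float = 10000.0):
    """Apply 2D rotary position embeddings to (N, D) queries or keys.

    Half of the rotation angles come from the row coordinate and half from the
    column coordinate, replacing learned absolute position embeddings.
    """
    n, d = q.shape
    half = d // 2
    freqs = base ** (-torch.arange(0, half, 2).float() / half)        # (half/2,)
    angles = torch.cat(
        [coords[:, :1].float() * freqs, coords[:, 1:].float() * freqs], dim=-1
    )                                                                 # (N, half)
    cos, sin = angles.cos(), angles.sin()
    q1, q2 = q[:, 0::2], q[:, 1::2]                                   # paired features
    return torch.cat([q1 * cos - q2 * sin, q1 * sin + q2 * cos], dim=-1)

# Two images at different resolutions yield different token counts.
tokens_a, coords_a = patchify(torch.randn(3, 384, 384))   # 27 x 27 = 729 tokens
tokens_b, coords_b = patchify(torch.randn(3, 448, 672))   # 32 x 48 = 1536 tokens
q = torch.randn(tokens_b.shape[0], 64)
q_rot = rope_2d(q, coords_b)
print(tokens_a.shape, tokens_b.shape, q_rot.shape)
```

Because the token count grows with resolution, a 448×672 input simply produces more tokens than a 384×384 one instead of being resized to a fixed grid.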

Before training, the model parameters and architecture are initialized from SigLIP (google/siglip-so400m-patch14-384).

## 🚀 Model Performance

| Base Model | GQA | AI2D | ChartQA | DocVQA (val) | MME |
|:---|:---:|:---:|:---:|:---:|:---:|
| clip-vit-large-patch14-336 | 61.50 | 56.28 | 18.32 | 24.86 | 1668.41 |
| dfn5B-clip-vit-h-14-378 | 62.70 | 56.87 | 16.40 | 23.09 | 1665.35 |
| siglip-so400m-patch14-384 (this implementation) | 62.92 | 57.12 | 22.44 | 31.32 | 1667.92 |

- A more detailed analysis can be found in our paper.

## 🤖 Quick Start

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, AutoModel, AutoImageProcessor

model_name = "DAMO-NLP-SG/VL3-SigLIP-NaViT"

# Load the checkpoint and its custom modeling code from the Hub.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)

# Video conversation
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": [
            {"type": "video", "data": {"video_path": "https://github.com/DAMO-NLP-SG/VideoLLaMA3/raw/refs/heads/main/assets/cat_and_chicken.mp4", "fps": 1, "max_frames": 128}},
            {"type": "text", "data": "What is the cat doing?"},
        ]
    },
]

# Preprocess the conversation, move tensors to the GPU, and keep pixel values in bfloat16.
inputs = processor(conversation=conversation, return_tensors="pt")
inputs = {k: v.cuda() if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
if "pixel_values" in inputs:
    inputs["pixel_values"] = inputs["pixel_values"].to(torch.bfloat16)

# Generate and decode the response.
output_ids = model.generate(**inputs, max_new_tokens=128)
response = processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(response)
```
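
Since this checkpoint is published as the standalone visual encoder (pipeline tag: image-feature-extraction), you may instead want to extract image features with it directly. The sketch below is a hedged guess at that pattern using the generic AutoModel / AutoImageProcessor entry points that the snippet above already imports; whether this repo registers them, and what the forward signature and output fields look like, are assumptions, so inspect the remote code before relying on it.

```python
# Hedged sketch: using the checkpoint as a standalone image feature extractor.
# AutoModel / AutoImageProcessor support, the processor call signature, and the
# structure of `outputs` are assumptions about this remote-code repo.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

model_name = "DAMO-NLP-SG/VL3-SigLIP-NaViT"
encoder = AutoModel.from_pretrained(
    model_name, trust_remote_code=True, torch_dtype=torch.bfloat16
)
image_processor = AutoImageProcessor.from_pretrained(model_name, trust_remote_code=True)

# Any-resolution input ("example.jpg" is a placeholder path).
image = Image.open("example.jpg")
inputs = image_processor(images=[image], return_tensors="pt")

with torch.no_grad():
    # Cast floating-point inputs (e.g. pixel values) to the encoder's dtype.
    outputs = encoder(
        **{k: (v.to(torch.bfloat16) if v.is_floating_point() else v) for k, v in inputs.items()}
    )

# Inspect `outputs` to locate the patch-token embeddings; the attribute names
# depend on the remote modeling code.
print(outputs)
```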

## Citation

If you find VideoLLaMA useful for your research and applications, please cite using this BibTeX:

```bibtex
@article{damonlpsg2025videollama3,
  title   = {VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding},
  author  = {Boqiang Zhang and Kehan Li and Zesen Cheng and Zhiqiang Hu and Yuqian Yuan and Guanzheng Chen and Sicong Leng and Yuming Jiang and Hang Zhang and Xin Li and Peng Jin and Wenqi Zhang and Fan Wang and Lidong Bing and Deli Zhao},
  journal = {arXiv preprint arXiv:2501.xxxxx},
  year    = {2025},
  url     = {https://arxiv.org/abs/2501.xxxxx}
}

@article{damonlpsg2024videollama2,
  title   = {VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs},
  author  = {Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong},
  journal = {arXiv preprint arXiv:2406.07476},
  year    = {2024},
  url     = {https://arxiv.org/abs/2406.07476}
}

@article{damonlpsg2023videollama,
  title   = {Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding},
  author  = {Zhang, Hang and Li, Xin and Bing, Lidong},
  journal = {arXiv preprint arXiv:2306.02858},
  year    = {2023},
  url     = {https://arxiv.org/abs/2306.02858}
}
```