POINTS-Qwen-2-5-7B-Chat

Introduction

We are excited to announce the first version of POINTS, which integrates recent advancements in vision-language models with new techniques proposed by researchers from WeChat AI.

🏠 GitHub   |   📑 Paper

What's new in POINTS?

Key Innovations

  1. Strong Baseline: We integrate the most recent advancements in vision-language models, i.e., CapFusion, Dual Vision Encoder, and Dynamic High Resolution, into POINTS.

  2. Pre-training Dataset Filtering: We propose filtering the pre-training dataset using perplexity as the metric. With this strategy, we can significantly shrink the pre-training dataset while improving model performance (see the sketch after this list).

  3. Model Soup: We apply model soup to models fine-tuned with different visual instruction tuning datasets, which further improves performance substantially (see the second sketch after this list).
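
As a rough illustration of the perplexity filter, the sketch below scores each pre-training caption with a reference causal language model and keeps only the lowest-perplexity fraction. The scoring model, keep ratio, and helper names are illustrative assumptions, not the exact recipe from the paper.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

scorer_name = 'Qwen/Qwen2.5-7B'  # assumption: any causal LM can serve as the scoring model
scorer_tokenizer = AutoTokenizer.from_pretrained(scorer_name)
scorer = AutoModelForCausalLM.from_pretrained(
    scorer_name, torch_dtype=torch.bfloat16, device_map='cuda').eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    # labels=input_ids makes the model return the mean cross-entropy loss,
    # whose exponential is the perplexity of the text
    ids = scorer_tokenizer(text, return_tensors='pt').input_ids.to(scorer.device)
    loss = scorer(ids, labels=ids).loss
    return torch.exp(loss).item()

def filter_by_perplexity(captions, keep_ratio=0.2):
    # keep_ratio is a placeholder; how much data is retained is a tuning choice
    scored = sorted(captions, key=perplexity)
    return scored[:int(len(scored) * keep_ratio)]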

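The model soup step can be pictured as a plain weight average over checkpoints fine-tuned on different visual instruction tuning sets. Below is a minimal sketch, assuming a uniform soup and placeholder checkpoint paths; it is not the exact procedure from the paper.

import torch
from transformers import AutoModelForCausalLM

checkpoint_paths = ['ckpt_instruct_a', 'ckpt_instruct_b', 'ckpt_instruct_c']  # placeholders
models = [AutoModelForCausalLM.from_pretrained(p, trust_remote_code=True)
          for p in checkpoint_paths]
state_dicts = [m.state_dict() for m in models]

# uniform soup: average every floating-point tensor across the checkpoints
averaged = {}
for name, tensor in state_dicts[0].items():
    if tensor.is_floating_point():
        averaged[name] = torch.stack([sd[name] for sd in state_dicts]).mean(dim=0)
    else:
        averaged[name] = tensor  # keep non-float buffers from the first checkpoint

souped = models[0]
souped.load_state_dict(averaged)
souped.save_pretrained('points-soup')  # the averaged weights become the final model
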
How to use POINTS?

from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import CLIPImageProcessor
from PIL import Image
import torch
import requests
from io import BytesIO


# Download the example image and set the prompt
image_url = 'https://github.com/user-attachments/assets/83258e94-5d61-48ef-a87f-80dd9d895524'
response = requests.get(image_url)
image_data = BytesIO(response.content)
pil_image = Image.open(image_data)
prompt = 'please describe the image in detail'

# Load the model, tokenizer, and image processor; trust_remote_code is required
# because the checkpoint ships its own modeling and chat code
model_path = 'WePOINTS/POINTS-Qwen-2-5-7B-Chat'
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, trust_remote_code=True, device_map='cuda').to(torch.bfloat16)
image_processor = CLIPImageProcessor.from_pretrained(model_path)

# temperature 0.0 with a single beam gives deterministic, greedy decoding
generation_config = {
    'max_new_tokens': 1024,
    'temperature': 0.0,
    'top_p': 0.0,
    'num_beams': 1,
}
res = model.chat(
    pil_image,
    prompt,
    tokenizer,
    image_processor,
    True,  # enable dynamic high-resolution splitting of the input image
    generation_config
)
print(res)

Evaluation

Benchmark           InternVL2-8B  LLaVA-OneVision  POINTS
MMBench-dev-en      -             80.8             83.2
MathVista           58.3          62.3             63.1
HallucinationBench  45.0          31.6             46.0
OCRBench            79.4          62.2             72.0
AI2D                83.6          82.4             80.9
MMVet               54.3          51.9             52.3
MMStar              61.5          61.9             61.0
MMMU                51.2          47.9             49.4
ScienceQA           97.1          95.4             -
MME                 2215.1        1993.6           2195.2
RealWorldQA         64.2          69.9             67.3
LLaVA-Wild          73.3          81.0             71.1

Citation

If you find our work helpful, feel free to cite us:

@article{liu2024points,
  title={POINTS: Improving Your Vision-language Model with Affordable Strategies},
  author={Liu, Yuan and Zhao, Zhongyin and Zhuang, Ziyuan and Tian, Le and Zhou, Xiao and Zhou, Jie},
  journal={arXiv preprint arXiv:2409.04828},
  year={2024}
}

@article{liu2024rethinking,
  title={Rethinking Overlooked Aspects in Vision-Language Models},
  author={Liu, Yuan and Tian, Le and Zhou, Xiao and Zhou, Jie},
  journal={arXiv preprint arXiv:2405.11850},
  year={2024}
}