Add link to paper and GitHub repo

#1
opened by nielsr (HF staff)
Files changed (1)
  1. README.md +13 -86
README.md CHANGED
@@ -1,15 +1,14 @@
-
  ---
- license_name: qwen-research
- license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct-AWQ/blob/main/LICENSE
+ base_model:
+ - Qwen/Qwen2.5-VL-3B-Instruct
  language:
  - en
+ library_name: transformers
+ license_name: qwen-research
+ license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct-AWQ/blob/main/LICENSE
  pipeline_tag: image-text-to-text
  tags:
  - multimodal
- library_name: transformers
- base_model:
- - Qwen/Qwen2.5-VL-3B-Instruct
  ---

  # Qwen2.5-VL-3B-Instruct-AWQ
@@ -51,6 +50,11 @@ We enhance both training and inference speeds by strategically implementing wind

  We have three models with 3, 7 and 72 billion parameters. This repo contains the instruction-tuned 3B Qwen2.5-VL model with AWQ. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2.5-vl/) and [GitHub](https://github.com/QwenLM/Qwen2.5-VL).

+ See also:
+
+ - [Project website](https://chat.qwenlm.ai/)
+ - [Technical report](https://huggingface.co/papers/2502.13923)
+ - [Github](https://github.com/QwenLM/Qwen2.5-VL)


  ## Evaluation
@@ -86,7 +90,7 @@ KeyError: 'qwen2_5_vl'
  We offer a toolkit to help you handle various types of visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:

  ```bash
- # It's highly recommanded to use `[decord]` feature for faster video loading.
+ # It's highly recommended to use `[decord]` feature for faster video loading.
  pip install qwen-vl-utils[decord]==0.0.8
  ```

@@ -97,7 +101,7 @@ If you are not using Linux, you might not be able to install `decord` from PyPI.
  Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:

  ```python
- from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
+ from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
  from qwen_vl_utils import process_vision_info

  # default: Load the model on the available device(s)
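The hunk above stops right before the model is actually instantiated. For readers skimming the diff, a minimal sketch of how that default load typically looks; the `torch_dtype="auto"` and `device_map="auto"` arguments follow the standard `transformers` loading pattern and are not part of this hunk:

```python
from transformers import Qwen2_5_VLForConditionalGeneration

# Default load: let transformers pick the dtype stored in the checkpoint
# config and spread the weights across the available device(s).
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct-AWQ",
    torch_dtype="auto",
    device_map="auto",
)
```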
@@ -113,7 +117,7 @@ model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
  # device_map="auto",
  # )

- # default processer
+ # default processor
  processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct-AWQ")

  # The default range for the number of visual tokens per image in the model is 4-16384.
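The last context line above refers to the visual-token budget, which can be bounded through the processor's pixel limits. A minimal sketch; the `256 * 28 * 28` and `1280 * 28 * 28` values (roughly 256 to 1280 visual tokens per image) are illustrative and not part of this diff:

```python
from transformers import AutoProcessor

# One visual token corresponds to roughly a 28x28 pixel block, so bounding
# the pixel count per image bounds the number of visual tokens per image.
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct-AWQ",
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)
```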
@@ -163,7 +167,6 @@ print(output_text)
  ### 🤖 ModelScope
  We strongly advise users especially those in mainland China to use ModelScope. `snapshot_download` can help you solve issues concerning downloading checkpoints.

-
  ### More Usage Tips

  For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
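The ModelScope note above mentions `snapshot_download` without showing it. A minimal sketch, assuming the ModelScope copy of this checkpoint uses the `qwen/Qwen2.5-VL-3B-Instruct-AWQ` id found in the links elsewhere in this README (verify the id on modelscope.cn before relying on it):

```python
from modelscope import snapshot_download

# Fetch the checkpoint from ModelScope and reuse the local path with
# from_pretrained(...) instead of the Hugging Face repo id.
model_dir = snapshot_download("qwen/Qwen2.5-VL-3B-Instruct-AWQ")
print(model_dir)
```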
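For the local-file, base64, and URL image forms named in the last line above, a minimal sketch of how they are typically written in the message content; the paths, URL, and base64 payload are placeholders:

```python
# Three interchangeable ways to reference an image in a message; only one is
# used per content entry. All values below are placeholders.
local_image = {"type": "image", "image": "file:///path/to/image.jpg"}
url_image = {"type": "image", "image": "https://example.com/image.jpg"}
base64_image = {"type": "image", "image": "data:image;base64,<BASE64_DATA>"}

messages = [
    {
        "role": "user",
        "content": [local_image, {"type": "text", "text": "Describe this image."}],
    }
]
```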
@@ -250,80 +253,4 @@ messages = [
  ],
  }
  ]
- ```
-
- ### Processing Long Texts
-
- The current `config.json` is set for context length up to 32,768 tokens.
- To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
-
- For supported frameworks, you could add the following to `config.json` to enable YaRN:
-
- ```
- {
- ...,
- "type": "yarn",
- "mrope_section": [
- 16,
- 24,
- 24
- ],
- "factor": 4,
- "original_max_position_embeddings": 32768
- }
- ```
-
- However, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended for use.
-
- At the same time, for long video inputs, since MRoPE itself is more economical with ids, the max_position_embeddings can be directly modified to a larger value, such as 64k.
-
- ### Benchmark
- #### Performance of Quantized Models
- This section reports the generation performance of quantized models (including GPTQ and AWQ) of the Qwen2.5-VL series. Specifically, we report:
-
- - MMMU_VAL (Accuracy)
- - DocVQA_VAL (Accuracy)
- - MMBench_DEV_EN (Accuracy)
- - MathVista_MINI (Accuracy)
-
- We use [VLMEvalkit](https://github.com/open-compass/VLMEvalKit) to evaluate all models.
-
- | Model Size | Quantization | MMMU_VAL | DocVQA_VAL | MMBench_EDV_EN | MathVista_MINI |
- | --- | --- | --- | --- | --- | --- |
- | Qwen2.5-VL-72B-Instruct | BF16<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct)[🤖](https://modelscope.cn/models/qwen/Qwen2.5-VL-72B-Instruct)) | 70.0 | 96.1 | 88.2 | 75.3 |
- | | AWQ<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct-AWQ)[🤖](https://modelscope.cn/models/qwen/Qwen2.5-VL-72B-Instruct-AWQ)) | 69.1 | 96.0 | 87.9 | 73.8 |
- | Qwen2.5-VL-7B-Instruct | BF16<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)[🤖](https://modelscope.cn/models/qwen/Qwen2.5-VL-7B-Instruct)) | 58.4 | 94.9 | 84.1 | 67.9 |
- | | AWQ<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct-AWQ)[🤖](https://modelscope.cn/models/qwen/Qwen2.5-VL-7B-Instruct-AWQ)) | 55.6 | 94.6 | 84.2 | 64.7 |
- | Qwen2.5-VL-3B-Instruct | BF16<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct)[🤖](https://modelscope.cn/models/qwen/Qwen2.5-VL-3B-Instruct)) | 51.7 | 93.0 | 79.8 | 61.4 |
- | | AWQ<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct-AWQ)[🤖](https://modelscope.cn/models/qwen/Qwen2.5-VL-3B-Instruct-AWQ)) | 49.1 | 91.8 | 78.0 | 58.8 |
-
-
-
-
- ## Citation
-
- If you find our work helpful, feel free to give us a cite.
-
- ```
- @misc{qwen2.5-VL,
- title = {Qwen2.5-VL},
- url = {https://qwenlm.github.io/blog/qwen2.5-vl/},
- author = {Qwen Team},
- month = {January},
- year = {2025}
- }
-
- @article{Qwen2VL,
- title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
- author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
- journal={arXiv preprint arXiv:2409.12191},
- year={2024}
- }
-
- @article{Qwen-VL,
- title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
- author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
- journal={arXiv preprint arXiv:2308.12966},
- year={2023}
- }
  ```
 
 