---
license: apache-2.0
inference: false
base_model: Qwen/Qwen2-VL-7B-Instruct
base_model_relation: quantized 
tags: [green, llmware-vision, p7, ov, emerald]
---

# qwen2-vl-7b-instruct-ov

**qwen2-vl-7b-instruct-ov** is an OpenVINO int4 quantized version of [Qwen2-VL-7B-Instruct](https://www.huggingface.co/Qwen/Qwen2-VL-7B-Instruct), providing a fast, small-footprint inference implementation optimized for AI PCs using Intel CPU, GPU and NPU.

Qwen2-VL-7B-Instruct is a high-quality multi-modal vision-to-text model from Qwen's Qwen2 release series (summer 2024) that accepts image, video and text inputs.

### Model Description

- **Developed by:** Qwen
- **Quantized by:** llmware  
- **Model type:** qwen2-vl
- **Parameters:** 7 billion  
- **Model Parent:** Qwen/Qwen2-VL-7B-Instruct  
- **Language(s) (NLP):** English  
- **License:** Apache 2.0  
- **Uses:** Multi-Modal LLM  
- **Quantization:** int4  
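
As an illustration of how an int4 OpenVINO export of this kind can be produced, the parent model can be converted with optimum-intel. This is a sketch only, not necessarily the exact recipe llmware used, and the output directory name is illustrative:

```python
# Illustrative export sketch -- assumes optimum-intel with OpenVINO support:
#   pip install "optimum[openvino]"
from optimum.intel import OVModelForVisualCausalLM, OVWeightQuantizationConfig

# Convert the parent model to OpenVINO IR with int4 weight compression.
model = OVModelForVisualCausalLM.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    export=True,
    quantization_config=OVWeightQuantizationConfig(bits=4),
)
model.save_pretrained("qwen2-vl-7b-instruct-ov")
```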


For an open-source inference implementation, please see this [Intel OpenVINO notebook](https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/qwen2-vl).
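
Below is a minimal inference sketch using the `openvino-genai` package's `VLMPipeline`; the local model path, repo id, image file and prompt are illustrative assumptions, not part of this card:

```python
# Minimal inference sketch -- assumes the model files were downloaded locally, e.g.:
#   huggingface-cli download llmware/qwen2-vl-7b-instruct-ov --local-dir qwen2-vl-7b-instruct-ov
# Requires: pip install openvino-genai pillow numpy
import numpy as np
import openvino as ov
import openvino_genai
from PIL import Image

# Target "CPU", "GPU" or "NPU" depending on the AI PC hardware available.
pipe = openvino_genai.VLMPipeline("qwen2-vl-7b-instruct-ov", "CPU")

# Load an image as a uint8 tensor with shape [1, H, W, 3] (RGB).
image = np.array(Image.open("sample.jpg").convert("RGB"), dtype=np.uint8)
image_tensor = ov.Tensor(image[np.newaxis, ...])

answer = pipe.generate(
    "Describe this image.",
    image=image_tensor,
    max_new_tokens=128,
)
print(answer)
```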


## Model Card Contact  

[llmware on github](https://www.github.com/llmware-ai/llmware) 

[llmware on hf](https://www.huggingface.co/llmware)  

[llmware website](https://www.llmware.ai)