---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-1.5B
tags:
- chat
---

<hr>

# Llama.cpp imatrix quantizations of Qwen/Qwen2.5-1.5B-Instruct

<img src="https://cdn-uploads.huggingface.co/production/uploads/646410e04bf9122922289dc7/gDUbZOu1ND0j-th4Q6tep.jpeg" alt="llama" width="60%"/>

Using llama.cpp commit [eca0fab](https://github.com/ggerganov/llama.cpp/commit/eca0fab) for quantization.

Original model: [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)

All quants were made using the imatrix option and Bartowski's [calibration file](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8).
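
For reference, the sketch below shows a typical llama.cpp imatrix quantization workflow. It is a minimal illustration, not the exact commands used for this repository: the binary names and flags assume a llama.cpp build from around the pinned commit, and the file names and the IQ4_XS type are placeholders.

```python
# Minimal sketch of a typical llama.cpp imatrix quantization workflow.
# Assumes llama.cpp is built locally and the calibration text linked above
# was saved as calibration.txt; all paths and names are illustrative.
import subprocess

# 1. Convert the Hugging Face checkpoint to an F16 GGUF file.
subprocess.run([
    "python", "convert_hf_to_gguf.py", "Qwen2.5-1.5B-Instruct",
    "--outtype", "f16", "--outfile", "qwen2.5-1.5b-instruct-f16.gguf",
], check=True)

# 2. Compute the importance matrix over the calibration data.
subprocess.run([
    "./llama-imatrix", "-m", "qwen2.5-1.5b-instruct-f16.gguf",
    "-f", "calibration.txt", "-o", "imatrix.dat",
], check=True)

# 3. Quantize with the importance matrix (IQ4_XS shown as an example type).
subprocess.run([
    "./llama-quantize", "--imatrix", "imatrix.dat",
    "qwen2.5-1.5b-instruct-f16.gguf", "qwen2.5-1.5b-instruct-iq4_xs.gguf", "IQ4_XS",
], check=True)
```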

<hr>

# Perplexity table (the lower the better)

| Quant | Size (MB) | PPL | Size (%) | Accuracy (%) | PPL error rate |
| ------- | --------- | -------- | -------- | ------------ | -------------- |
| IQ1_S | 417 | 193.6245 | 14.13 | 5.24 | 1.77149 |
| IQ1_M | 443 | 66.9068 | 15.01 | 15.17 | 0.52878 |
| IQ2_XXS | 488 | 33.3356 | 16.54 | 30.45 | 0.25559 |
| IQ2_XS | 525 | 20.287 | 17.79 | 50.04 | 0.14936 |
| IQ2_S | 538 | 18.2927 | 18.23 | 55.49 | 0.1338 |
| IQ2_M | 574 | 15.4838 | 19.45 | 65.56 | 0.11113 |
| Q2_K_S | 611 | 16.0169 | 20.7 | 63.38 | 0.11623 |
| IQ3_XXS | 638 | 12.3935 | 21.62 | 81.91 | 0.0877 |
| Q2_K | 645 | 14.1657 | 21.86 | 71.66 | 0.10105 |
| IQ3_XS | 698 | 11.7112 | 23.65 | 86.68 | 0.08256 |
| Q3_K_S | 726 | 12.4782 | 24.6 | 81.35 | 0.08842 |
| IQ3_S | 728 | 11.4241 | 24.67 | 88.86 | 0.07977 |
| IQ3_M | 741 | 11.4058 | 25.11 | 89 | 0.07862 |
| Q3_K_M | 786 | 11.3529 | 26.64 | 89.42 | 0.08018 |
| Q3_K_L | 840 | 11.1934 | 28.46 | 90.69 | 0.07913 |
| IQ4_XS | 855 | 10.5302 | 28.97 | 96.4 | 0.07351 |
| IQ4_NL | 893 | 10.5116 | 30.26 | 96.57 | 0.07335 |
| Q4_0 | 895 | 10.8217 | 30.33 | 93.8 | 0.07576 |
| Q4_K_S | 897 | 10.5236 | 30.4 | 96.46 | 0.0736 |
| Q4_K_M | 941 | 10.4628 | 31.89 | 97.02 | 0.0731 |
| Q4_1 | 970 | 10.51 | 32.87 | 96.59 | 0.07347 |
| Q5_K_S | 1048 | 10.2715 | 35.51 | 98.83 | 0.07148 |
| Q5_0 | 1051 | 10.3196 | 35.62 | 98.37 | 0.07212 |
| Q5_K_M | 1073 | 10.2529 | 36.36 | 99.01 | 0.07143 |
| Q5_1 | 1126 | 10.2624 | 38.16 | 98.92 | 0.0714 |
| Q6_K | 1214 | 10.203 | 41.14 | 99.49 | 0.07108 |
| Q8_0 | 1571 | 10.167 | 53.24 | 99.84 | 0.07068 |
| F16 | 2951 | 10.1512 | 100 | 100 | 0.07058 |
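
The relative columns appear to be ratios against the F16 reference: Size (%) as the quant's size over the F16 size, and Accuracy (%) as the F16 perplexity over the quant's perplexity. A quick check against the Q4_K_M row, under that assumption:

```python
# Sanity check of the relative columns, assuming they are ratios to the F16 row.
f16_size_mb, f16_ppl = 2951, 10.1512
q4_k_m_size_mb, q4_k_m_ppl = 941, 10.4628

print(round(100 * q4_k_m_size_mb / f16_size_mb, 2))  # 31.89 -> matches Size (%)
print(round(100 * f16_ppl / q4_k_m_ppl, 2))          # 97.02 -> matches Accuracy (%)
```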

<hr>

# Qwen2.5-1.5B-Instruct

## Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** of up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

**This repo contains the instruction-tuned 1.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: full 32,768 tokens; generation up to 8,192 tokens

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
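
As a rough cross-check of the figures listed above, the minimal sketch below reads them back from the original checkpoint's configuration; it assumes the standard Qwen2 config field names in `transformers`.

```python
# Inspect the architecture details listed above from the original checkpoint.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
print(config.num_hidden_layers)    # expected: 28 layers
print(config.num_attention_heads)  # expected: 12 query heads
print(config.num_key_value_heads)  # expected: 2 KV heads (GQA)
print(config.tie_word_embeddings)  # expected: True (tied word embeddings)
```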

## Requirements

The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
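
A quick way to confirm the installed version meets this requirement (a minimal sketch; `packaging` ships as a dependency of `transformers`):

```python
# Check that the installed transformers release is new enough for the "qwen2" architecture.
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old; run `pip install -U transformers`"
)
```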

## Quickstart

The following code snippet shows how to use `apply_chat_template` to load the tokenizer and model and generate content:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-1.5B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
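
Since this repository ships GGUF quants rather than the original safetensors weights, below is a minimal sketch of running one of them with `llama-cpp-python`; the GGUF file name is an assumption, so substitute whichever quant you downloaded.

```python
# Run a downloaded GGUF quant with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-1.5B-Instruct-Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,
)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
        {"role": "user", "content": "Give me a short introduction to large language models."},
    ],
    max_tokens=512,
)
print(output["choices"][0]["message"]["content"])
```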

## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).

For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

## Citation

If you find our work helpful, feel free to cite us.

```
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title={Qwen2 Technical Report},
    author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal={arXiv preprint arXiv:2407.10671},
    year={2024}
}
```