---
language:
- en
pipeline_tag: text-generation
license: llama3
license_link: https://llama.meta.com/llama3/license/
---

# Meta-Llama-3-70B-Instruct-quantized.w8a8

## Model Overview
- **Model Architecture:** Meta-Llama-3
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT8
- **Intended Use Cases:** Intended for commercial and research use in English. Similar to [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/14/2024
- **Version:** 1.0
- **License(s):** [Llama3](https://llama.meta.com/llama3/license/)
- **Model Developers:** Neural Magic

Quantized version of [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
It achieves an average score of 79.18 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), matching the 79.18 achieved by the unquantized model.

### Model Optimizations

This model was obtained by quantizing the weights of [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) to the INT8 data type.
This optimization reduces the number of bits per parameter from 16 to 8, cutting the disk size and GPU memory requirements by approximately 50% (for the 70B parameters, roughly 140 GB of weights in FP16 versus roughly 70 GB in INT8).

Only the weights of the linear operators within transformers blocks are quantized. Symmetric per-channel quantization is applied: a separate linear scale per output dimension maps between the INT8 and floating-point representations of the quantized weights, as illustrated in the sketch below.
[AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) is used for quantization, with a 10% damping factor and 128 sequences taken from Neural Magic's [LLM compression calibration dataset](https://huggingface.co/datasets/neuralmagic/LLM_compression_calibration).
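
As a concrete illustration of the scheme, here is a minimal sketch of symmetric per-channel INT8 quantization. The function names and tensors are illustrative only, not the AutoGPTQ internals (GPTQ additionally uses the calibration data to minimize quantization error rather than rounding to nearest):

```python
import torch

def quantize_per_channel_symmetric(weight: torch.Tensor):
    # weight: [out_features, in_features]; one scale per output channel,
    # guarded against all-zero rows to avoid division by zero.
    scales = (weight.abs().amax(dim=1, keepdim=True) / 127.0).clamp_min(1e-8)
    q = torch.clamp(torch.round(weight / scales), -127, 127).to(torch.int8)
    return q, scales

def dequantize(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    # The per-channel linear scale maps INT8 back to floating point.
    return q.to(torch.float32) * scales

w = torch.randn(8, 16)
q, s = quantize_per_channel_symmetric(w)
print((w - dequantize(q, s)).abs().max())  # worst-case rounding error
```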

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below (using 2 GPUs).

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Meta-Llama-3-70B-Instruct-quantized.w8a8"
number_gpus = 2

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving; see the sketch below and the [documentation](https://docs.vllm.ai/en/latest/) for more details.

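For example, a minimal client-side sketch, assuming the server has been launched with `python -m vllm.entrypoints.openai.api_server --model neuralmagic/Meta-Llama-3-70B-Instruct-quantized.w8a8 --tensor-parallel-size 2` and is listening on the default port 8000 (the API key is a placeholder, since the local server does not check it):

```python
from openai import OpenAI

# Point the client at the local vLLM OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="neuralmagic/Meta-Llama-3-70B-Instruct-quantized.w8a8",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    temperature=0.6,
    top_p=0.9,
    max_tokens=256,
)
print(completion.choices[0].message.content)
```
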
### Use with transformers

Transformers supports this model through its integration with the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) data format.
The following example shows how to use the model with the `generate()` function.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "neuralmagic/Meta-Llama-3-70B-Instruct-quantized.w8a8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Creation

This model was created by applying the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) library, as presented in the code snippet below.
Although AutoGPTQ was used for this particular model, Neural Magic is transitioning to [llm-compressor](https://github.com/vllm-project/llm-compressor), which supports several quantization schemes and models not supported by AutoGPTQ.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from datasets import load_dataset

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"

num_samples = 128
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_id)

def preprocess_fn(example):
    # Render each calibration conversation with the model's chat template.
    return {"text": tokenizer.apply_chat_template(example["messages"], add_generation_prompt=False, tokenize=False)}

ds = load_dataset("neuralmagic/LLM_compression_calibration", split="train")
ds = ds.shuffle().select(range(num_samples))
ds = ds.map(preprocess_fn)

examples = [tokenizer(example["text"], padding=False, max_length=max_seq_len, truncation=True) for example in ds]

quantize_config = BaseQuantizeConfig(
    bits=8,                        # quantize weights to 8 bits
    group_size=-1,                 # -1: per-channel (one scale per output channel)
    desc_act=False,
    model_file_base_name="model",
    damp_percent=0.1,              # 10% damping factor
)

model = AutoGPTQForCausalLM.from_pretrained(
    model_id,
    quantize_config,
    device_map="auto",
)

model.quantize(examples)
model.save_pretrained("Meta-Llama-3-70B-Instruct-quantized.w8a8")
```

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command (on 2 GPUs):
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3-70B-Instruct-quantized.w8a8",tensor_parallel_size=2,dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096 \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores

Recovery is the quantized model's score expressed as a percentage of the unquantized model's score.

| Benchmark | Meta-Llama-3-70B-Instruct | Meta-Llama-3-70B-Instruct-quantized.w8a8 (this model) | Recovery |
| :--- | :---: | :---: | :---: |
| MMLU (5-shot) | 80.18 | 79.41 | 99.0% |
| ARC Challenge (25-shot) | 72.44 | 72.61 | 100.2% |
| GSM-8K (5-shot, strict-match) | 90.83 | 92.27 | 101.6% |
| Hellaswag (10-shot) | 85.54 | 85.75 | 100.2% |
| Winogrande (5-shot) | 83.19 | 82.56 | 99.2% |
| TruthfulQA (0-shot) | 62.92 | 62.48 | 99.3% |
| **Average** | **79.18** | **79.18** | **100.0%** |
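
For reference, the recovery column is a simple ratio; the sketch below reproduces it from the scores in the table above:

```python
# Recovery = quantized score / unquantized score, as a percentage.
baseline = {"MMLU": 80.18, "ARC Challenge": 72.44, "GSM-8K": 90.83,
            "Hellaswag": 85.54, "Winogrande": 83.19, "TruthfulQA": 62.92}
quantized = {"MMLU": 79.41, "ARC Challenge": 72.61, "GSM-8K": 92.27,
             "Hellaswag": 85.75, "Winogrande": 82.56, "TruthfulQA": 62.48}

for task, base in baseline.items():
    print(f"{task}: {100 * quantized[task] / base:.1f}%")
```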