abhinavkulkarni committed
Commit d0e693a
1 Parent(s): 98f34f3

Create README.md

Files changed (1):
  1. README.md +108 -0

README.md ADDED
---
license: cc-by-sa-3.0
tags:
- MosaicML
- AWQ
inference: false
---

# MPT-7B-Instruct (4-bit 128g AWQ Quantized)
[MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) is a model for short-form instruction following.

This model is a 4-bit, 128-group-size AWQ-quantized version of MPT-7B-Instruct. For more information about AWQ quantization, see the [llm-awq repository](https://github.com/mit-han-lab/llm-awq).
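
For intuition, "4-bit 128g" means each row of a weight matrix is split into groups of 128 consecutive values, and each group is mapped to 4-bit integers with its own scale and zero point (matching `zero_point=True` and `q_group_size=128` in the loading snippet below). Here is a minimal round-trip sketch of that scheme, not the actual AWQ implementation, which additionally applies activation-aware per-channel scaling before quantizing:

```python
import torch

def groupwise_roundtrip(w: torch.Tensor, w_bit: int = 4, group_size: int = 128) -> torch.Tensor:
    """Toy zero-point group quantization (quantize then dequantize), for intuition only."""
    out_f, in_f = w.shape
    assert in_f % group_size == 0
    w_g = w.reshape(out_f, in_f // group_size, group_size)
    qmax = 2 ** w_bit - 1                                # 15 integer levels for 4-bit
    w_min = w_g.amin(dim=-1, keepdim=True)
    w_max = w_g.amax(dim=-1, keepdim=True)
    scale = (w_max - w_min).clamp(min=1e-5) / qmax       # one scale per group of 128
    zero = torch.round(-w_min / scale)                   # one zero point per group
    q = torch.clamp(torch.round(w_g / scale) + zero, 0, qmax)  # the stored 4-bit codes
    return ((q - zero) * scale).reshape(out_f, in_f)     # what the kernel dequantizes to

# e.g., groupwise_roundtrip(torch.randn(4096, 4096)) closely approximates its input
```

Storing one scale and zero point per 128 weights keeps the per-weight overhead small while letting each group adapt to its own value range.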

## Model Date

July 5, 2023

## Model License

Please refer to the original MPT model license ([link](https://huggingface.co/mosaicml/mpt-7b-instruct)).

Please refer to the AWQ quantization license ([link](https://github.com/mit-han-lab/llm-awq/blob/main/LICENSE)).

## How to Use

First, install the llm-awq package, pinned to the commit this model was quantized with:

```bash
git clone https://github.com/mit-han-lab/llm-awq \
&& cd llm-awq \
&& git checkout 71d8e68df78de6c0c817b029a568c064bf22132d \
&& pip install -e .
```
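
A quick sanity check that the install exposes the entry point used below (a hypothetical smoke test, not part of the upstream repo):

```python
# This is exactly the import the loading snippet below relies on.
from awq.quantize.quantizer import real_quantize_model_weight  # noqa: F401
print("llm-awq installed and importable")
```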

Then load and run the quantized model:

```python
import torch
from awq.quantize.quantizer import real_quantize_model_weight
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import hf_hub_download

model_name = "mosaicml/mpt-7b-instruct"

# Config
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name)

# Quantization settings this checkpoint was produced with
w_bit = 4
q_config = {
    "zero_point": True,
    "q_group_size": 128,
}

# Download the quantized checkpoint from this repository
load_quant = hf_hub_download('abhinavkulkarni/mpt-7b-instruct-w4-g128-awq', 'pytorch_model.bin')

# Build the model skeleton without allocating real weights
with init_empty_weights():
    model = AutoModelForCausalLM.from_pretrained(model_name, config=config,
        torch_dtype=torch.float16, trust_remote_code=True)

# Swap in quantized linear layers (init_only=True sets up structure, no weights yet)
real_quantize_model_weight(model, w_bit=w_bit, q_config=q_config, init_only=True)

# Load the quantized weights and place them on the available devices
model = load_checkpoint_and_dispatch(model, load_quant, device_map="balanced")

# Inference
prompt = '''What is the difference between nuclear fusion and fission?
### Response:'''

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()
output = model.generate(
    inputs=input_ids,
    temperature=0.7,
    max_new_tokens=512,
    top_p=0.15,
    top_k=0,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(output[0]))
```
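
MPT-7B-Instruct was finetuned on dolly-style instruction data, so longer prompts typically wrap the question in the full template from the [MPT-7B-Instruct model card](https://huggingface.co/mosaicml/mpt-7b-instruct). A sketch of that template (adapted from the model card; verify the exact wording there before relying on it):

```python
INTRO = ("Below is an instruction that describes a task. "
         "Write a response that appropriately completes the request.")

def build_prompt(instruction: str) -> str:
    # Dolly-style template: intro blurb, the instruction, then the response key
    return f"{INTRO}\n### Instruction:\n{instruction}\n### Response:\n"

prompt = build_prompt("What is the difference between nuclear fusion and fission?")
```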

## Acknowledgements

The MPT model was originally finetuned by Sam Havens and the MosaicML NLP team. Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author = {MosaicML NLP Team},
    title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
    year = {2023},
    url = {www.mosaicml.com/blog/mpt-7b},
    note = {Accessed: 2023-03-28}, % change this date
    urldate = {2023-03-28} % change this date
}
```

The model was quantized using the AWQ technique. If you find AWQ useful or relevant to your research, please kindly cite the paper:

```
@article{lin2023awq,
    title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
    author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
    journal={arXiv},
    year={2023}
}
```