---
license: cc-by-sa-3.0
datasets:
- competition_math
- conceptofmind/cot_submix_original/cot_gsm8k
- knkarthick/dialogsum
- mosaicml/dolly_hhrlhf
- duorc
- tau/scrolls/qasper
- emozilla/quality
- scrolls/summ_screen_fd
- spider
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.

# mpt-30b-instruct - GGUF
- Model creator: [mosaicml](https://huggingface.co/mosaicml)
- Original model: [mpt-30b-instruct](https://huggingface.co/mosaicml/mpt-30b-instruct)

# Important Update for Falcon Models in llama.cpp Versions After October 18, 2023

As noted on the [Llama.cpp GitHub repository](https://github.com/ggerganov/llama.cpp#hot-topics), all new Llama.cpp releases after October 18, 2023, will require a re-quantization due to the new BPE tokenizer.

**Good news!** My re-quantization process for Falcon models is nearly complete. Download the latest quantized models to ensure compatibility with recent llama.cpp software.

**Key Points:**

- **Stay Informed:** Keep an eye on the release schedules of software applications that use llama.cpp libraries.
- **Monitor Upload Times:** Re-quantization is *almost* done. Watch for updates on my Hugging Face model pages.

**Important Compatibility Note:** Old software will work with old Falcon models, but expect updated software to exclusively support the new models.

This change primarily affects **Falcon** and **Starcoder** models; other models remain unaffected.

# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software uses it and can therefore run this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.

# Quantization variants

A number of quantized files are available. Here is how to choose the one that is best for you:

# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
Falcon 7B models, for example, cannot be quantized to K-quants.

# K-quants

K-quants are based on the idea that quantizing certain parts of the model affects quality in different ways. Quantizing some parts more aggressively and others less yields either a more capable model at the same file size, or a smaller file with lower memory load at comparable quality.
So, if possible, use K-quants.
With a Q6_K quant you should find it very hard to detect a quality difference from the original model - in fact, asking the model the same question twice may produce larger differences than the quantization itself.

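Once you have downloaded one of the quantized files, it can be loaded with any GGUF-capable runtime. As a minimal sketch (not part of the original card), assuming the `llama-cpp-python` bindings are installed and using a placeholder filename for whichever quant you downloaded from this repository:

```python
# Minimal sketch: load a GGUF quant with the llama-cpp-python bindings.
# The file name below is a placeholder - substitute the quantized file you actually downloaded.
from llama_cpp import Llama

llm = Llama(model_path="mpt-30b-instruct.Q4_K_M.gguf", n_ctx=2048)  # context size is illustrative

# MPT-30B-Instruct expects the instruction format described in the original model card below.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "###Instruction\nTell me a funny joke.\n\n### Response\n"
)
output = llm(prompt, max_tokens=128)
print(output["choices"][0]["text"])
```
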
---

# Original Model Card:
# MPT-30B-Instruct

MPT-30B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf), which is derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It is also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider).
* License: _CC-By-SA-3.0_

This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.

## Model Date

June 22, 2023

## Model License

CC-By-SA-3.0

## Documentation

* [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!

### Example Question/Instruction

**Bespokenizer46**
> I need to convince my friend, Phyllis, that she should train a custom LLM for her Fortune 500 company using the MosaicML Platform.
> Please write an email that explains why MosaicML's emphasis on cutting edge methodology, data privacy, and efficiency are so important.
> End the email with a friendly inquiry about Phyllis's family.

**MPT-30B-Instruct**:
> Phyllis -
> I know you're always looking for ways to improve efficiency at work and I think I've found a solution that you'll be very interested in.
> LLMs are only as good as the data they're trained on and the training process can be very time consuming and resource intensive. MosaicML uses cutting edge methodology to efficiently train high quality models without compromising data privacy.
> They also provide tools to easily connect to and use the model in your daily workflow.
> I think you'd really enjoy speaking with their founder, we can set up a call if you're interested.
> Also, I know it's been a tough year for your family, how are things?

> Best,
> Your Friend

## How to Use

Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.

It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.

```python
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-30b-instruct',
    trust_remote_code=True
)
```

To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers

name = 'mosaicml/mpt-30b-instruct'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'  # change this to use triton-based FlashAttention
config.init_device = 'cuda:0'  # for fast initialization directly on GPU

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,  # load model weights in bfloat16
    trust_remote_code=True
)
```

The model was trained initially on a sequence length of 2048. An additional pre-training phase was included for sequence length adaptation to 8192. However, ALiBi further enables users to increase the maximum sequence length during finetuning and/or inference. For example:

```python
import transformers

name = 'mosaicml/mpt-30b-instruct'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384  # (input + output) tokens can now be up to 16384

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    trust_remote_code=True
)
```

This model was trained with the MPT-30B tokenizer, which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```

The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).

```python
import torch
from transformers import pipeline

with torch.autocast('cuda', dtype=torch.bfloat16):
    inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe('Here is a recipe for vegan banana bread:\n',
             max_new_tokens=100,
             do_sample=True,
             use_cache=True))
```

### Formatting

This model was trained on data formatted as follows:

```python
def format_prompt(instruction):
    template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n###Instruction\n{instruction}\n\n### Response\n"
    return template.format(instruction=instruction)

example = "Tell me a funny joke.\nDon't make it too funny though."
fmt_ex = format_prompt(instruction=example)
```

In the above example, `fmt_ex` is ready to be tokenized and sent through the model.

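For illustration only (not part of the original card), and assuming the `model` and `tokenizer` objects from the earlier snippets are already loaded in the same session, the formatted prompt can then be passed through the model like this:

```python
import torch

# Sketch: generate a response for the formatted prompt `fmt_ex` defined above,
# reusing the `model` and `tokenizer` objects from the earlier snippets.
with torch.autocast('cuda', dtype=torch.bfloat16):
    inputs = tokenizer(fmt_ex, return_tensors='pt').to('cuda')
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```
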
## Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases

| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 29.95B |
| n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |

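As a rough sanity check (not from the original card), the reported parameter count follows from the table above if one assumes roughly 12 * d_model^2 parameters per bias-free transformer block (attention plus a 4x-expansion MLP) plus the embedding matrix:

```python
# Back-of-the-envelope parameter count for MPT-30B, assuming ~12 * d_model^2
# parameters per block (4*d^2 attention + 8*d^2 MLP with 4x expansion, no biases)
# plus the embedding matrix. This is an estimate, not an exact accounting.
d_model, n_layers, vocab_size = 7168, 48, 50432

per_block = 12 * d_model ** 2         # ~616.6 M parameters per block
embeddings = vocab_size * d_model     # ~361.5 M parameters
total = n_layers * per_block + embeddings
print(f"{total / 1e9:.2f} B")         # ~29.96 B, close to the reported 29.95B
```
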
## Data Mix

The model was trained on the following data mix:

| Data Source | Number of Tokens in Source | Proportion |
|-------------|----------------------------|------------|
| competition_math | 1.6 M | 3.66% |
| cot_gsm8k | 3.36 M | 7.67% |
| dialogsum | 0.1 M | 0.23% |
| dolly_hhrlhf | 5.89 M | 13.43% |
| duorc | 7.8 M | 17.80% |
| qasper | 8.72 M | 19.90% |
| quality | 11.29 M | 25.78% |
| scrolls/summ_screen_fd | 4.97 M | 11.33% |
| spider | 0.089 M | 0.20% |

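The "Proportion" column appears to be each source's token count divided by the total (~43.8 M tokens). A quick check of this (not part of the original card):

```python
# Verify that the proportions in the data-mix table follow from the token counts.
token_counts_m = {
    'competition_math': 1.6,
    'cot_gsm8k': 3.36,
    'dialogsum': 0.1,
    'dolly_hhrlhf': 5.89,
    'duorc': 7.8,
    'qasper': 8.72,
    'quality': 11.29,
    'scrolls/summ_screen_fd': 4.97,
    'spider': 0.089,
}
total = sum(token_counts_m.values())      # ~43.83 M tokens
for name, tokens in token_counts_m.items():
    print(f"{name}: {tokens / total:.2%}")  # e.g. competition_math -> ~3.65%, matching the table to rounding
```
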
## PreTraining Data

For more details on the pretraining process, see [MPT-30B](https://huggingface.co/mosaicml/mpt-30b).

The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.

### Training Configuration

This model was trained on 72 A100 40GB GPUs for 8 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.

## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

MPT-30B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## Acknowledgements

This model was finetuned by Sam Havens, Alex Trott, and the MosaicML NLP team.

## MosaicML Platform

If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Citation

Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author  = {MosaicML NLP Team},
    title   = {Introducing MPT-30B: Raising the bar for open-source foundation models},
    year    = {2023},
    url     = {www.mosaicml.com/blog/mpt-30b},
    note    = {Accessed: 2023-06-22},
    urldate = {2023-06-22}
}
```

***End of original Model File***
---

## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I'm hoping for support and contributions to keep these kinds of models available. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.

<center>

[![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
[![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911)
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht)
[![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht)
[![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966)

</center>