Update README.md
Added link to quants

README.md
---
base_model:
- grimjim/zephyr-beta-wizardLM-2-merge-7B
- alpindale/Mistral-7B-v0.2-hf
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# madwind-wizard-7B

This is a merge of pre-trained 7B language models created using [mergekit](https://github.com/cg123/mergekit).

The intended goal of this merge was to combine the 32K context window of the Mistral v0.2 base with the richness and strength of the Zephyr Beta and WizardLM 2 models. This was a mixed-precision merge, promoting the Mistral v0.2 base from fp16 to bf16.

The result can be used for text generation. Note that Zephyr Beta's training removed built-in alignment from its datasets, resulting in a model that is more likely to generate problematic text when prompted. This merge appears to have inherited that behavior.

- Full weights: [grimjim/madwind-wizard-7B](https://huggingface.co/grimjim/madwind-wizard-7B)
- GGUF quants: [grimjim/madwind-wizard-7B-GGUF](https://huggingface.co/grimjim/madwind-wizard-7B-GGUF) (see the loading sketch below)

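A minimal sketch of running one of the GGUF quants locally with llama-cpp-python; the quant filename below is hypothetical, so substitute whichever file from the GGUF repo fits your hardware:

```python
# Minimal sketch: running a GGUF quant with llama-cpp-python.
# The filename is hypothetical; download whichever quant from the
# GGUF repo suits your memory budget.
from llama_cpp import Llama

llm = Llama(
    model_path="madwind-wizard-7B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=32768,  # the merge targets Mistral v0.2's 32K context window
)

out = llm("Write a short poem about wind.", max_tokens=128)
print(out["choices"][0]["text"])
```
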
## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

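For intuition, SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the arc between them rather than along a straight line, which tends to preserve weight magnitudes better than plain averaging. A minimal per-tensor sketch in PyTorch follows; it illustrates the technique and is not mergekit's exact implementation, which adds normalization and more careful edge-case handling:

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors at fraction t."""
    v0 = w0.flatten().float()
    v1 = w1.flatten().float()
    # Angle between the two tensors, treated as flat vectors.
    cos_omega = torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0, 1.0))
    if omega.abs() < 1e-4:
        # Nearly colinear tensors: fall back to plain linear interpolation.
        return (1.0 - t) * w0 + t * w1
    sin_omega = torch.sin(omega)
    s0 = torch.sin((1.0 - t) * omega) / sin_omega
    s1 = torch.sin(t * omega) / sin_omega
    return (s0 * v0 + s1 * v1).reshape(w0.shape).to(w0.dtype)
```

With `t: 0.5`, as in the configuration below, each merged tensor sits at the midpoint of the arc between the two source tensors.
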
### Models Merged

The following models were included in the merge:
* [grimjim/zephyr-beta-wizardLM-2-merge-7B](https://huggingface.co/grimjim/zephyr-beta-wizardLM-2-merge-7B)
* [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: alpindale/Mistral-7B-v0.2-hf
        layer_range: [0,32]
      - model: grimjim/zephyr-beta-wizardLM-2-merge-7B
        layer_range: [0,32]
merge_method: slerp
base_model: alpindale/Mistral-7B-v0.2-hf
parameters:
  t:
    - value: 0.5
dtype: bfloat16
```
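
For completeness, a minimal text-generation sketch that loads the full-weight repo above with transformers; the prompt and generation settings are illustrative only:

```python
# Minimal sketch: loading the full-weight merge with transformers.
# Generation parameters are illustrative, not tuned recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "grimjim/madwind-wizard-7B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # the merge was produced in bf16
    device_map="auto",
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```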