---
base_model:
- grimjim/zephyr-beta-wizardLM-2-merge-7B
- alpindale/Mistral-7B-v0.2-hf
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# madwind-wizard-7B
This is a merge of pre-trained 7B language models created using [mergekit](https://github.com/cg123/mergekit).
The goal of this merge was to combine the 32K context window of the Mistral v0.2 base model with the richness and strength of the Zephyr Beta and WizardLM 2 models. This was a mixed-precision merge, promoting the Mistral v0.2 base model from fp16 to bf16.

The result can be used for text generation. Note that Zephyr Beta was trained on datasets from which built-in alignment had been removed, resulting in a model more likely to generate problematic text when prompted. This merge appears to have inherited that trait.
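The merged model can be loaded like any other causal language model with the Transformers library. Below is a minimal sketch, assuming this repository's id is `grimjim/madwind-wizard-7B` and that `accelerate` is installed so `device_map="auto"` works; the prompt and sampling parameters are purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/madwind-wizard-7B"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bf16
    device_map="auto",
)

prompt = "Write a short story about a sailing ship caught in a storm."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```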
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
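SLERP (spherical linear interpolation) blends two weight tensors along the arc between them on a hypersphere rather than along a straight line, which tends to preserve tensor norms better than plain averaging. The sketch below illustrates the idea for a single pair of tensors; it is a simplified illustration of the technique, not mergekit's exact implementation.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two tensors, computed from their unit vectors.
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.acos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel tensors: fall back to linear interpolation.
        out = (1.0 - t) * a_flat + t * b_flat
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```

With `t: 0.5`, as in the configuration below, each merged tensor lies at the spherical midpoint between the corresponding tensors of the two source models.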
### Models Merged
The following models were included in the merge:
* [grimjim/zephyr-beta-wizardLM-2-merge-7B](https://huggingface.co/grimjim/zephyr-beta-wizardLM-2-merge-7B)
* [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: alpindale/Mistral-7B-v0.2-hf
        layer_range: [0, 32]
      - model: grimjim/zephyr-beta-wizardLM-2-merge-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: alpindale/Mistral-7B-v0.2-hf
parameters:
  t:
    - value: 0.5
dtype: bfloat16
```
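With mergekit installed, a configuration like this is typically applied via its `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./output-model-directory` (the output path here is illustrative).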