---
base_model:
- alpindale/WizardLM-2-8x22B
- openbmb/Eurux-8x22b-nca
- openbmb/Eurux-8x22b-kto
- fireworks-ai/mixtral-8x22b-instruct-oh
- migtissera/Tess-2.0-Mixtral-8x22B
- mistralai/Mixtral-8x22B-v0.1
tags:
- mergekit
- merge
---
|
# WizardLM-2-8x22B-model_stock |
|
|
|
A [mergekit](https://github.com/arcee-ai/mergekit) model_stock merge made with the aim of improving WizardLM-2-8x22B. |
|
The resulting model suppresses WizardLM-2-8x22B's overly flowery and positive writing style while retaining useful features such as chain-of-thought (CoT) reasoning. It remains highly coherent even at long contexts and benchmarks above the WizardLM-2 base on intelligence tests.
|
Use the Vicuna prompt format, as with the WizardLM-2-8x22B base model.
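For reference, here is a minimal sketch of building a Vicuna-style prompt. The system message below is the common Vicuna 1.1 default, not something specified by this model card; treat it as an assumption and adjust to taste.

```python
# Sketch of a Vicuna-style prompt for this model.
# ASSUMPTION: the standard Vicuna 1.1 system message; the model card
# only says "Vicuna prompt", so tweak this if your results differ.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    # Vicuna format: system text, then alternating USER:/ASSISTANT: turns.
    # The generation continues after the trailing "ASSISTANT:".
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

prompt = build_prompt("Explain the model_stock merge method in one sentence.")
```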
|
|
|
An exllamav2 [measurement.json](./measurement.json) is included for quantization.
|
|
|
The merge was produced with the following `mergekit_config.yml`:
|
```yml
models:
  - model: alpindale/WizardLM-2-8x22B
  - model: openbmb/Eurux-8x22b-kto
  - model: openbmb/Eurux-8x22b-nca
  - model: mistralai/Mixtral-8x22B-v0.1
  - model: migtissera/Tess-2.0-Mixtral-8x22B
  - model: fireworks-ai/mixtral-8x22b-instruct-oh
base_model: alpindale/WizardLM-2-8x22B
merge_method: model_stock
dtype: bfloat16
```
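To reproduce the merge locally, an invocation along these lines should work (a sketch only; the flags shown are from the mergekit README, so verify them against your installed version, and note an 8x22B merge needs substantial disk and memory):

```shell
# ASSUMPTION: flag names per the mergekit README; check your version.
pip install mergekit
mergekit-yaml mergekit_config.yml ./WizardLM-2-8x22B-model_stock \
    --cuda --lazy-unpickle --copy-tokenizer
```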
|
|
|
I likely won't upload the full weights myself due to bandwidth limitations.
|
|