Mistral Instruct Merges
Collection: merges of Mistral Instruct v0.1 and v0.2 using different mergekit techniques (6 items).
This is a merge of pre-trained language models created with mergekit, using its mixtral branch.
This is an experimental model and has nothing to do with Mixtral: Mixtral is not a merge of separate models per se, but a transformer whose MoE layers are learned during training.
This merge uses a random gate, so I don't expect great results. We'll see!
This model was merged using the MoE merge method.
The following models were included in the merge:
- mistralai/Mistral-7B-Instruct-v0.1
- mistralai/Mistral-7B-Instruct-v0.2
The following YAML configuration was used to produce this model:
base_model: mistralai/Mistral-7B-Instruct-v0.2
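# random gating initializes the router weights randomly instead of deriving them
# from hidden states of the positive prompts, so the empty positive_prompts
# entries below are effectively unused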
gate_mode: random
dtype: bfloat16
experts:
- source_model: mistralai/Mistral-7B-Instruct-v0.2
positive_prompts: [""]
- source_model: mistralai/Mistral-7B-Instruct-v0.1
positive_prompts: [""]
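After running the merge (e.g. with mergekit's mergekit-moe script from the mixtral branch), the output is a standard Mixtral-architecture checkpoint that can be loaded with transformers. Below is a minimal usage sketch, assuming the merge was written to a local directory named ./mistral-instruct-moe (a placeholder path):

```python
# Minimal usage sketch; "./mistral-instruct-moe" is a placeholder for the merge output directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./mistral-instruct-moe"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

# Both source models use the Mistral Instruct chat format ([INST] ... [/INST]),
# which the tokenizer's chat template applies here.
messages = [{"role": "user", "content": "What is a mixture-of-experts layer?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```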
Open LLM Leaderboard evaluation results (detailed per-task results are available on the leaderboard):
| Metric | Value |
|---|---|
| Avg. | 61.39 |
| AI2 Reasoning Challenge (25-shot) | 61.01 |
| HellaSwag (10-shot) | 81.55 |
| MMLU (5-shot) | 58.22 |
| TruthfulQA (0-shot) | 60.40 |
| Winogrande (5-shot) | 76.09 |
| GSM8k (5-shot) | 31.08 |