# Q2.5-Partron-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the DELLA merge method, with djuna/Q2.5-Fuppavy-7B as the base model.
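DELLA operates on each model's *delta* from the base (its task vector): it stochastically drops low-magnitude delta entries while keeping high-magnitude ones with higher probability, rescales the survivors, and merges the results. In the config below, `epsilon` controls the spread of the keep probabilities around each model's target `density`, and `lambda` scales the merged deltas before they are added back to the base. The sketch below illustrates the pruning step only; the function name and the exact linear probability schedule are assumptions, not mergekit's actual implementation.

```python
import torch

def della_prune(delta: torch.Tensor, density: float, epsilon: float) -> torch.Tensor:
    """Illustrative magnitude-based sampling step in the spirit of DELLA.

    delta: a task vector (fine-tuned weights minus base weights).
    density: target fraction of entries to keep (per-model `density` below).
    epsilon: spread of keep probabilities around `density`.
    """
    flat = delta.flatten()
    n = flat.numel()
    # Rank entries by magnitude: 0 = smallest, n - 1 = largest.
    ranks = flat.abs().argsort().argsort().float()
    # Assumed schedule: keep probabilities vary linearly with rank
    # inside [density - epsilon/2, density + epsilon/2].
    probs = (density - epsilon / 2) + epsilon * ranks / max(n - 1, 1)
    probs = probs.clamp(0.0, 1.0)
    mask = torch.bernoulli(probs).bool()
    # Rescale survivors so the delta is unchanged in expectation.
    pruned = torch.where(mask, flat / probs, torch.zeros_like(flat))
    return pruned.reshape(delta.shape)
```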
### Models Merged

The following models were included in the merge:

- Locutusque/StockQwen-2.5-7B
- happzy2633/qwen2.5-7b-ins-v3
- fblgit/cybertron-v4-qw7B-MGS
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Locutusque/StockQwen-2.5-7B
    parameters:
      weight: 0.5
      density: 0.5
  - model: happzy2633/qwen2.5-7b-ins-v3
    parameters:
      weight: 0.3
      density: 1
  - model: fblgit/cybertron-v4-qw7B-MGS
    parameters:
      weight: 1
      density: 0.8
merge_method: della
base_model: djuna/Q2.5-Fuppavy-7B
parameters:
  epsilon: 0.04
  lambda: 1.05
dtype: float32
out_dtype: bfloat16
```
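To reproduce the merge, this config can be passed to mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./Q2.5-Partron-7B`). The merged model loads like any other Qwen2.5-7B checkpoint through the standard transformers API; a minimal inference sketch, assuming transformers and accelerate are installed (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "djuna/Q2.5-Partron-7B"

# torch_dtype="auto" picks up the bfloat16 weights produced by
# out_dtype in the config above.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Briefly explain model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```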
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 27.08 |
| IFEval (0-shot) | 73.21 |
| BBH (3-shot) | 35.26 |
| MATH Lvl 5 (4-shot) | 0.08 |
| GPQA (0-shot) | 6.38 |
| MuSR (0-shot) | 11.07 |
| MMLU-PRO (5-shot) | 36.47 |