# merge

This is a merge of pre-trained language models created using mergekit.
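For quick testing, the merged model loads like any Hugging Face causal LM. A minimal sketch using transformers (the prompt is illustrative; generation settings are not tuned):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nexesenex/Llama_3.2_1b_AquaSyn_0.11"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Build a chat-formatted prompt via the model's own chat template.
messages = [{"role": "user", "content": "Briefly explain what a model merge is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```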
## Merge Details

### Merge Method

This model was merged using the Model Stock merge method, with artificialguybr/LLAMA3.2-1B-Synthia-II-Redmond as the base model.
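For reference, Model Stock interpolates between the average of the fine-tuned weights and the base weights, with a ratio derived from the angle between the fine-tuned task vectors. Below is a minimal per-tensor sketch of that rule following the formula in the Model Stock paper; it assumes at least two fine-tuned models, and mergekit's actual implementation may differ in details such as per-layer handling:

```python
import torch

def model_stock_merge(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge one weight tensor with the Model Stock interpolation rule."""
    k = len(tuned)  # number of fine-tuned models; assumed >= 2
    # Task vectors: offsets of each fine-tuned weight from the base.
    deltas = [t - base for t in tuned]
    # Average pairwise cosine between task vectors (the paper's angle theta).
    cosines = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].flatten(), deltas[j].flatten()
            cosines.append(torch.dot(a, b) / (a.norm() * b.norm() + 1e-12))
    cos_theta = torch.stack(cosines).mean().clamp(-1.0, 1.0)
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Move from the base toward the average of the fine-tuned weights.
    w_avg = torch.stack(tuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```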
### Models Merged

The following models were included in the merge:

- pankajmathur/orca_mini_v9_6_1B-instruct
- cognitivecomputations/Dolphin3.0-Llama3.2-1B
### Configuration

The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
models:
  - model: pankajmathur/orca_mini_v9_6_1B-instruct
    parameters:
      weight: 1.0
  - model: cognitivecomputations/Dolphin3.0-Llama3.2-1B
    parameters:
      weight: 1.0
base_model: artificialguybr/LLAMA3.2-1B-Synthia-II-Redmond
dtype: bfloat16
normalize: false
chat_template: auto
tokenizer:
  source: union
```
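A sketch of reproducing the merge programmatically, assuming the configuration above is saved locally as merge.yml (a hypothetical path). The entry points (MergeConfiguration, run_merge, MergeOptions) follow mergekit's documented Python API, but check the project README for the current signatures:

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML merge recipe shown above.
with open("merge.yml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the merged model to a local directory.
run_merge(
    config,
    out_path="./Llama_3.2_1b_AquaSyn_0.11",
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)
```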
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                |  5.87 |
| IFEval (0-Shot)     | 24.31 |
| BBH (3-Shot)        |  3.65 |
| MATH Lvl 5 (4-Shot) |  2.34 |
| GPQA (0-shot)       |  2.01 |
| MuSR (0-shot)       |  1.60 |
| MMLU-PRO (5-shot)   |  1.29 |