# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [artificialguybr/LLAMA3.2-1B-Synthia-II-Redmond](https://huggingface.co/artificialguybr/LLAMA3.2-1B-Synthia-II-Redmond) as the base model.
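Model Stock averages the fine-tuned checkpoints and then interpolates that average back toward the base model, with a ratio derived from the angle between the fine-tuned weight deltas. The snippet below is a minimal sketch of that idea for a single weight tensor, following the formula from the paper; it is illustrative only, not mergekit's actual implementation.

```python
# Illustrative sketch of Model Stock interpolation for one weight tensor
# (arXiv:2403.19522); NOT mergekit's implementation.
import torch
import torch.nn.functional as F

def model_stock(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    n = len(finetuned)
    deltas = [(w - base).flatten() for w in finetuned]
    # Mean pairwise cosine similarity between the fine-tuned deltas.
    cos = torch.stack([
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(n) for j in range(i + 1, n)
    ]).mean()
    # Interpolation ratio from the paper: t = N*cos / (1 + (N-1)*cos).
    t = n * cos / (1 + (n - 1) * cos)
    # Pull the average of the fine-tuned weights toward the base by (1 - t).
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```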
### Models Merged

The following models were included in the merge:

- [hyunseoki/llama3.2-1b-Open-R1-GRPO-test0](https://huggingface.co/hyunseoki/llama3.2-1b-Open-R1-GRPO-test0)
- [cognitivecomputations/Dolphin3.0-Llama3.2-1B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Llama3.2-1B)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
models:
  - model: hyunseoki/llama3.2-1b-Open-R1-GRPO-test0
    parameters:
      weight: 1.0
  - model: cognitivecomputations/Dolphin3.0-Llama3.2-1B
    parameters:
      weight: 1.0
base_model: artificialguybr/LLAMA3.2-1B-Synthia-II-Redmond
dtype: bfloat16
normalize: true
```
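To reproduce the merge, save the configuration above to a file and run it through mergekit, either via the `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./merged`) or the Python API. A minimal sketch of the Python route, assuming mergekit is installed and using placeholder paths:

```python
# Minimal sketch of running the merge via mergekit's Python API;
# "config.yaml" and "./merged" are placeholder paths.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    "./merged",  # output directory for the merged model
    options=MergeOptions(copy_tokenizer=True),
)
```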
## Open LLM Leaderboard Evaluation Results

Detailed results for Nexesenex/Llama_3.2_1b_Sydonia_0.1 can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard).
| Metric              | Value |
|---------------------|------:|
| Avg.                |  5.52 |
| IFEval (0-Shot)     | 21.97 |
| BBH (3-Shot)        |  4.74 |
| MATH Lvl 5 (4-Shot) |  2.04 |
| GPQA (0-shot)       |  0.00 |
| MuSR (0-shot)       |  1.91 |
| MMLU-PRO (5-shot)   |  2.49 |
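This model is not currently served by any hosted Inference Provider, so to try it, load it locally with transformers. A minimal sketch, assuming the standard transformers API; the prompt and generation settings are arbitrary examples:

```python
# Minimal local-inference sketch with transformers; the prompt and
# generation settings are arbitrary examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nexesenex/Llama_3.2_1b_Sydonia_0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Briefly explain model merging.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```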