# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the Model Stock merge method, with NousResearch/DeepHermes-3-Llama-3-8B-Preview as the base model.
### Models Merged
The following models were included in the merge:

- meditsolutions/Llama-3.1-MedIT-SUN-8B
- huihui-ai/Dolphin3.0-Llama3.1-8B-abliterated
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
models:
  - model: meditsolutions/Llama-3.1-MedIT-SUN-8B
    parameters:
      weight: 1.0
  - model: huihui-ai/Dolphin3.0-Llama3.1-8B-abliterated
    parameters:
      weight: 1.0
base_model: NousResearch/DeepHermes-3-Llama-3-8B-Preview
dtype: bfloat16
normalize: true
chat_template: auto
tokenizer:
  source: union
```
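To reproduce the merge, the YAML above can be passed to mergekit, either via the `mergekit-yaml` command-line entry point or through its Python API. The sketch below is a minimal reproduction script, assuming mergekit's documented `MergeConfiguration` / `run_merge` interface; the file path, output directory, and `MergeOptions` fields are illustrative and may differ across mergekit versions.

```python
# Minimal reproduction sketch (not part of this card's original workflow).
# Assumes mergekit is installed and the YAML above is saved as merge-config.yml.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "merge-config.yml"   # the configuration shown above
OUTPUT_PATH = "./merged-model"    # hypothetical output directory

# Parse and validate the merge configuration.
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the model-stock merge and write the merged checkpoint to OUTPUT_PATH.
run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # emit the union tokenizer with the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The equivalent command-line invocation would be `mergekit-yaml merge-config.yml ./merged-model`.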
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric              | Value |
|---------------------|------:|
| Avg.                | 23.81 |
| IFEval (0-Shot)     | 50.01 |
| BBH (3-Shot)        | 31.13 |
| MATH Lvl 5 (4-Shot) | 17.75 |
| GPQA (0-shot)       |  4.36 |
| MuSR (0-shot)       | 12.64 |
| MMLU-PRO (5-shot)   | 26.96 |
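To spot-check the merged model locally against the benchmarks above, it can be loaded like any other Llama-3.1-class chat checkpoint. A minimal sketch with the transformers API follows; the prompt and sampling settings are illustrative assumptions, not the settings used by the leaderboard.

```python
# Minimal local-inference sketch. Assumes transformers is installed and a GPU
# with enough memory for an 8B model in bfloat16; generation settings are
# illustrative, not recommendations from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nexesenex/Llama_3.1_8b_Hermedive_R1_V1.01"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The merge config sets `chat_template: auto`, so the tokenizer's chat
# template can be applied directly.
messages = [{"role": "user", "content": "Summarize what a model-stock merge does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```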