# Qwen-2.5-7B-Deep-Stock-v1

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the Model Stock merge method, with ZeroXClem/Qwen-2.5-Aether-SlerpFusion-7B (with the bunnycore/Qwen-2.5-7b-rp-lora LoRA applied) as the base model.
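Model Stock (Jang et al., 2024) averages several fine-tuned checkpoints and then interpolates that average back toward the base weights, with the interpolation ratio derived from the angles between the fine-tuned models' weight deltas. The sketch below is a simplified illustration of the per-tensor rule, not mergekit's actual implementation; the function name and arguments are hypothetical.

```python
import numpy as np

def model_stock_layer(w0, finetuned):
    """Illustrative Model Stock rule for one weight tensor.

    w0: base-model weights; finetuned: list of k >= 2 fine-tuned tensors.
    """
    k = len(finetuned)
    deltas = [w - w0 for w in finetuned]
    # Average pairwise cosine similarity between the weight deltas.
    cos = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cos.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_theta = float(np.mean(cos))
    # Interpolation ratio from the Model Stock paper.
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = sum(finetuned) / k
    # Pull the average of the fine-tuned weights back toward the base.
    return t * w_avg + (1 - t) * w0
```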

### Models Merged

The following models were included in the merge:

* deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
* ZeroXClem/Qwen-2.5-Aether-SlerpFusion-7B
* Sakalti/light-7b-beta
* fblgit/cybertron-v4-qw7B-MGS + bunnycore/Qwen-2.5-7b-rp-lora
* bespokelabs/Bespoke-Stratos-7B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
base_model: ZeroXClem/Qwen-2.5-Aether-SlerpFusion-7B+bunnycore/Qwen-2.5-7b-rp-lora
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
  - model: ZeroXClem/Qwen-2.5-Aether-SlerpFusion-7B
  - model: ZeroXClem/Qwen-2.5-Aether-SlerpFusion-7B+bunnycore/Qwen-2.5-7b-rp-lora
  - model: Sakalti/light-7b-beta
  - model: fblgit/cybertron-v4-qw7B-MGS+bunnycore/Qwen-2.5-7b-rp-lora
  - model: bespokelabs/Bespoke-Stratos-7B
```
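To reproduce the merge locally, the configuration above can be saved to a file and fed to mergekit. A minimal sketch using mergekit's documented Python API follows; the config path and output directory are placeholders.

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (placeholder path).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Qwen-2.5-7B-Deep-Stock-v1",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is present
        copy_tokenizer=True,             # tokenizer_source: base
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```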

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 27.38 |
| IFEval (0-shot)     | 56.95 |
| BBH (3-shot)        | 34.08 |
| MATH Lvl 5 (4-shot) | 25.53 |
| GPQA (0-shot)       |  3.69 |
| MuSR (0-shot)       |  9.96 |
| MMLU-PRO (5-shot)   | 34.06 |
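The merge is exported in bfloat16 (`out_dtype: bfloat16` above), so it can be run locally with transformers. A minimal inference sketch, assuming the standard Qwen-2.5 chat template shipped with the tokenizer:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Qwen-2.5-7B-Deep-Stock-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Briefly explain model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```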