# Blabbertron-1.0

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, using Qwen/Qwen2.5-7B-Instruct with the ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3 LoRA applied as the base (in mergekit, `model+lora` denotes a LoRA applied on top of a base model before merging).
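For intuition, here is a minimal per-layer sketch of the Model Stock idea, assuming the paper's approximation of a roughly common angle θ between task vectors and its interpolation ratio t = N·cos θ / (1 + (N−1)·cos θ). Function and variable names are illustrative; mergekit's actual implementation operates over the full checkpoint and differs in detail.

```python
import torch

def model_stock_layer(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Sketch of a Model Stock merge for one weight tensor (needs >= 2 tuned models)."""
    n = len(tuned)
    # Task vectors: each fine-tuned weight relative to the base.
    deltas = [(w - base).flatten() for w in tuned]
    # Estimate the common angle via the average pairwise cosine similarity.
    cos_sum, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            cos_sum += torch.nn.functional.cosine_similarity(
                deltas[i], deltas[j], dim=0
            ).item()
            pairs += 1
    cos_theta = cos_sum / pairs
    # Interpolation ratio from the paper: t = N*cos / (1 + (N-1)*cos).
    t = n * cos_theta / (1.0 + (n - 1) * cos_theta)
    # Pull the average of the fine-tuned weights back toward the base.
    w_avg = torch.stack(tuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```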

### Models Merged

The following models were included in the merge:

* Qwen/Qwen2.5-7B-Instruct + bunnycore/Qwen-2.5-7b-s1k-lora_model
* Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
* bunnycore/Qwen2.5-7B-Instruct-Merge-Stock-v0.1
* gz987/qwen2.5-7b-cabs-v0.3 + ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
* gz987/qwen2.5-7b-cabs-v0.3 + bunnycore/Qwen-2.5-7b-rp-lora

## Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Qwen/Qwen2.5-7B-Instruct+bunnycore/Qwen-2.5-7b-s1k-lora_model
    parameters:
      weight: 0.3
  - model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
  - model: bunnycore/Qwen2.5-7B-Instruct-Merge-Stock-v0.1
  - model: gz987/qwen2.5-7b-cabs-v0.3+ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
  - model: gz987/qwen2.5-7b-cabs-v0.3+bunnycore/Qwen-2.5-7b-rp-lora
base_model: Qwen/Qwen2.5-7B-Instruct+ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
merge_method: model_stock
parameters:
dtype: bfloat16
tokenizer_source: Qwen/Qwen2.5-7B-Instruct
```
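To reproduce the merge, here is a minimal sketch using mergekit's Python API; the config path and option values are assumptions, not the settings actually used to produce this model.

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (file name is an assumption).
with open("blabbertron.yml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./Blabbertron-1.0",
    options=MergeOptions(
        cuda=False,           # set True to run the merge on a GPU
        copy_tokenizer=True,  # respects tokenizer_source in the config
        lazy_unpickle=True,   # lower peak RAM while loading shards
        low_cpu_memory=False,
    ),
)
```

Equivalently, the `mergekit-yaml` CLI can run the same configuration: `mergekit-yaml blabbertron.yml ./Blabbertron-1.0`.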

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 36.22 |
| IFEval (0-shot) | 74.33 |
| BBH (3-shot) | 36.05 |
| MATH Lvl 5 (4-shot) | 49.24 |
| GPQA (0-shot) | 6.94 |
| MuSR (0-shot) | 13.51 |
| MMLU-PRO (5-shot) | 37.27 |
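For completeness, a minimal inference sketch with 🤗 Transformers; the prompt and generation settings are illustrative, not recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Blabbertron-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Qwen2.5-Instruct derivatives use a chat template.
messages = [{"role": "user", "content": "Briefly explain model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```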