---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- gz987/qwen2.5-7b-cabs-v0.3
- ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
- bunnycore/Qwen2.5-7B-Instruct-Merge-Stock-v0.1
- Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
- Qwen/Qwen2.5-7B-Instruct
- bunnycore/Qwen-2.5-7b-s1k-lora_model
- gz987/qwen2.5-7b-cabs-v0.3
- bunnycore/Qwen-2.5-7b-rp-lora
- Qwen/Qwen2.5-7B-Instruct
- ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
model-index:
- name: Blabbertron-1.0
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 74.33
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Blabbertron-1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 36.05
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Blabbertron-1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 49.24
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Blabbertron-1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.94
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Blabbertron-1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.51
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Blabbertron-1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 37.27
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Blabbertron-1.0
      name: Open LLM Leaderboard
---
# Blabbertron-1.0

Blabbertron-1.0 is a merge of pre-trained Qwen2.5-7B language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) + [ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3](https://huggingface.co/ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3) as the base.
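
Model Stock averages the fine-tuned checkpoints and then pulls that average back toward the base model, with the interpolation ratio estimated from the angle between the fine-tunes' task vectors. The snippet below is a rough per-tensor sketch of that idea, not mergekit's actual implementation; `model_stock_layer` and its inputs are illustrative, and the ratio `t = k*cos(theta) / (1 + (k-1)*cos(theta))` follows the paper as I read it.

```python
import torch
import torch.nn.functional as F

def model_stock_layer(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Simplified per-tensor Model Stock merge (illustrative only).

    base:      pre-trained weight tensor w_0
    finetuned: k fine-tuned weight tensors w_1..w_k
    """
    k = len(finetuned)
    # Task vectors: each fine-tune's offset from the base weights.
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine similarity between task vectors (the paper's cos(theta)).
    cos_vals = [
        F.cosine_similarity(deltas[i].flatten(), deltas[j].flatten(), dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cos_vals).mean() if cos_vals else torch.tensor(1.0)
    # Interpolation ratio from the Model Stock paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Plain average of the fine-tunes, then interpolate back toward the base.
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```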
### Models Merged
The following models were included in the merge (a `+` joins a checkpoint with a LoRA adapter that is applied to it before merging; see the sketch after this list):
* [gz987/qwen2.5-7b-cabs-v0.3](https://huggingface.co/gz987/qwen2.5-7b-cabs-v0.3) + [ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3](https://huggingface.co/ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3)
* [bunnycore/Qwen2.5-7B-Instruct-Merge-Stock-v0.1](https://huggingface.co/bunnycore/Qwen2.5-7B-Instruct-Merge-Stock-v0.1)
* [Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview](https://huggingface.co/Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview)
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) + [bunnycore/Qwen-2.5-7b-s1k-lora_model](https://huggingface.co/bunnycore/Qwen-2.5-7b-s1k-lora_model)
* [gz987/qwen2.5-7b-cabs-v0.3](https://huggingface.co/gz987/qwen2.5-7b-cabs-v0.3) + [bunnycore/Qwen-2.5-7b-rp-lora](https://huggingface.co/bunnycore/Qwen-2.5-7b-rp-lora)
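
mergekit resolves the `+` syntax itself, but a `checkpoint + LoRA` entry can also be materialized on its own with `peft`. A minimal sketch, assuming the `transformers` and `peft` packages and using one pairing from the list above (the output path is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base checkpoint, attach the LoRA adapter, then bake it into the weights.
base = AutoModelForCausalLM.from_pretrained(
    "gz987/qwen2.5-7b-cabs-v0.3", torch_dtype=torch.bfloat16
)
merged = PeftModel.from_pretrained(base, "bunnycore/Qwen-2.5-7b-rp-lora").merge_and_unload()
merged.save_pretrained("./qwen2.5-7b-cabs-rp")  # illustrative output path
```

This intermediate checkpoint is only needed if you want the LoRA-applied model by itself; during the merge mergekit performs the same application internally.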
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Qwen/Qwen2.5-7B-Instruct+bunnycore/Qwen-2.5-7b-s1k-lora_model
    parameters:
      weight: 0.3
  - model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
  - model: bunnycore/Qwen2.5-7B-Instruct-Merge-Stock-v0.1
  - model: gz987/qwen2.5-7b-cabs-v0.3+ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
  - model: gz987/qwen2.5-7b-cabs-v0.3+bunnycore/Qwen-2.5-7b-rp-lora
base_model: Qwen/Qwen2.5-7B-Instruct+ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
merge_method: model_stock
parameters:
dtype: bfloat16
tokenizer_source: Qwen/Qwen2.5-7B-Instruct
```
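
Because `tokenizer_source` points at Qwen/Qwen2.5-7B-Instruct, the merged model keeps the standard Qwen2.5 tokenizer and chat template, so it loads and prompts like any other Qwen2.5-7B instruct checkpoint. A minimal sketch with `transformers` (the generation settings are illustrative defaults, not tuned recommendations):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Blabbertron-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain model merging in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Strip the prompt tokens and decode only the newly generated reply.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

To reproduce the merge itself rather than just use it, the YAML above can be saved to a file and passed to mergekit's CLI, roughly `mergekit-yaml config.yaml ./Blabbertron-1.0`.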
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/bunnycore__Blabbertron-1.0-details).
| Metric |Value|
|-------------------|----:|
|Avg. |36.22|
|IFEval (0-Shot) |74.33|
|BBH (3-Shot) |36.05|
|MATH Lvl 5 (4-Shot)|49.24|
|GPQA (0-shot) | 6.94|
|MuSR (0-shot) |13.51|
|MMLU-PRO (5-shot) |37.27|