# ichor_1.1-8B-Model_Stock

This is a merge of pre-trained language models created with mergekit.
## Merge Details

### Merge Method

This model was merged using the Model Stock merge method, with FuseAI/FuseChat-Llama-3.1-8B-SFT as the base model.
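For intuition, here is a minimal NumPy sketch of the idea behind Model Stock, applied to a single weight tensor. It is a hypothetical illustration, not mergekit's implementation: the fine-tuned weights are averaged, then interpolated back toward the base, with the interpolation ratio derived from the angle between the fine-tuned task vectors as in the Model Stock paper.

```python
import numpy as np

def model_stock_merge(base, finetuned):
    """Sketch of Model Stock for one flattened weight tensor.

    base      -- base-model weights (1-D array)
    finetuned -- list of k weight arrays fine-tuned from `base`
    """
    k = len(finetuned)
    deltas = [w - base for w in finetuned]

    # Average pairwise cosine similarity between the task vectors.
    cosines = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i], deltas[j]
            cosines.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_theta = float(np.mean(cosines))

    # Interpolation ratio from the Model Stock paper:
    # t = k*cos(theta) / (1 + (k-1)*cos(theta)).
    t = (k * cos_theta) / (1 + (k - 1) * cos_theta)

    # Move from the base toward the average of the fine-tuned weights.
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base
```

When the task vectors point in unrelated directions (cosine near 0), the result stays close to the base; when they agree (cosine near 1), it approaches the plain average of the fine-tuned models.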
### Models Merged

The following models were included in the merge:
- DreadPoor/LemonP-8B-Model_Stock
- SentientAGI/Dobby-Mini-Leashed-Llama-3.1-8B
- Nexesenex/Llama_3.1_8b_DodoWild_v2.01
- djuna/L3.1-Romes-Ninomos
- Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
  - model: DreadPoor/LemonP-8B-Model_Stock
  - model: SentientAGI/Dobby-Mini-Leashed-Llama-3.1-8B
  - model: djuna/L3.1-Romes-Ninomos
  - model: Nexesenex/Llama_3.1_8b_DodoWild_v2.01
merge_method: model_stock
base_model: FuseAI/FuseChat-Llama-3.1-8B-SFT
normalize: false
filter_wise: true
chat_template: "auto"
int8_mask: true
dtype: bfloat16
```
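Assuming mergekit is installed, a merge like this can be reproduced with its `mergekit-yaml` entry point (the output path here is illustrative):

```shell
# Install mergekit (assumption: standard PyPI package name).
pip install mergekit

# Save the YAML configuration above as config.yaml, then run the merge.
# --cuda uses a GPU for the tensor arithmetic if one is available.
mergekit-yaml config.yaml ./ichor_1.1-8B-Model_Stock --cuda
```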
## Open LLM Leaderboard Evaluation Results

Detailed and summarized results are available on the Open LLM Leaderboard.
| Metric              | Value (%) |
|---------------------|----------:|
| Average             |     30.02 |
| IFEval (0-shot)     |     80.96 |
| BBH (3-shot)        |     32.62 |
| MATH Lvl 5 (4-shot) |     17.75 |
| GPQA (0-shot)       |      7.49 |
| MuSR (0-shot)       |      9.55 |
| MMLU-PRO (5-shot)   |     31.73 |