# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the DARE TIES merge method, with unsloth/qwen2.5-32b-instruct as the base model.
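DARE TIES first sparsifies each model's delta from the base (DARE: randomly drop parameters with probability 1 − density, rescaling the survivors), then resolves sign conflicts between the models' deltas before summing (TIES). The sketch below is a minimal, illustrative tensor-level version of that idea, not mergekit's actual implementation; the function name, the demo tensors, and the `lam` argument (mirroring the `lambda` parameter in the configuration further down) are assumptions for illustration.

```python
import torch

def dare_ties_merge(base, tuned, weights, densities, lam=1.0):
    """Illustrative DARE TIES merge of one weight tensor (not mergekit's code).

    DARE: drop each delta entry with probability (1 - density) and rescale
    survivors by 1/density, so the expected delta is unchanged.
    TIES: elect a per-parameter sign from the weighted deltas, then discard
    entries that disagree with the elected sign before summing.
    """
    deltas = []
    for t, w, d in zip(tuned, weights, densities):
        delta = t - base                                   # task vector
        keep = torch.bernoulli(torch.full_like(delta, d))  # keep w.p. density
        deltas.append(w * delta * keep / d)                # rescale survivors
    stacked = torch.stack(deltas)
    elected = torch.sign(stacked.sum(dim=0))               # majority sign per param
    agree = torch.sign(stacked) == elected                 # sign-consensus mask
    merged_delta = (stacked * agree).sum(dim=0)
    return base + lam * merged_delta                       # lambda scales the update

# Tiny demo on random tensors.
base = torch.zeros(8)
tuned = [base + torch.randn(8) for _ in range(3)]
print(dare_ties_merge(base, tuned, weights=[1.0, 0.28, 0.25],
                      densities=[0.85, 0.75, 0.74]))
```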
### Models Merged
The following models were included in the merge:
- huihui-ai/QwQ-32B-Preview-abliterated
- AXCXEPT/EZO-Qwen2.5-32B-Instruct
- AiCloser/Qwen2.5-32B-AGI
- crestf411/Q2.5-32B-Slush
- ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
- nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
- rombodawg/Rombos-LLM-V2.5-Qwen-32b
- unsloth/qwen2.5-32b-instruct + AITRICS-VD/moca_impression_dataset_0923-Qwen2.5-32B-Instruct-sft-lora
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
    parameters:
      weight: 1.0
      density: 0.85
  - model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
    parameters:
      weight: 0.28
      density: 0.75
  - model: crestf411/Q2.5-32B-Slush
    parameters:
      weight: 0.25
      density: 0.74
  - model: AXCXEPT/EZO-Qwen2.5-32B-Instruct
    parameters:
      weight: 0.2
      density: 0.7
  - model: ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
    parameters:
      weight: 0.22
      density: 0.71
  - model: unsloth/qwen2.5-32b-instruct+AITRICS-VD/moca_impression_dataset_0923-Qwen2.5-32B-Instruct-sft-lora
    parameters:
      weight: 0.19
      density: 0.69
  - model: huihui-ai/QwQ-32B-Preview-abliterated
    parameters:
      weight: 0.16
      density: 0.67
  - model: nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
    parameters:
      weight: 0.12
      density: 0.6
  - model: AiCloser/Qwen2.5-32B-AGI
    parameters:
      weight: 0.14
      density: 0.66
merge_method: dare_ties
base_model: unsloth/qwen2.5-32b-instruct
parameters:
  density: 0.84
  epsilon: 0.07
  lambda: 1.24
dtype: bfloat16
tokenizer_source: union
```
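To reproduce a merge from this configuration, mergekit's CLI can be invoked as `mergekit-yaml config.yaml ./output-model-directory`, or the config can be run through the Python API. The sketch below is adapted from the example in mergekit's README; the `config.yaml` path and the output directory are placeholders, and option names may differ across mergekit versions.

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (assumed saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Qwenwify-32B-v2",        # output directory (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run tensor ops on GPU if available
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The `unsloth/qwen2.5-32b-instruct+AITRICS-VD/...-sft-lora` entry uses mergekit's `base+lora` syntax, which applies the LoRA adapter to the base model before that model participates in the merge.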