---
base_model:
- Sao10K/32B-Qwen2.5-Kunou-v1
- Kaoeiri/Qwenwify-32B-v3
- Qwen/QwQ-32B-Preview
- Dans-DiscountModels/Qwen2.5-32B-ChatML
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
license: cc-by-nc-nd-4.0
---
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the DARE TIES merge method, with Qwen/QwQ-32B-Preview as the base model.
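DARE TIES first sparsifies each donor model's task vector (its delta from the base) by randomly dropping entries and rescaling the survivors, then resolves sign conflicts between donors before adding the combined delta back onto the base. The sketch below illustrates that idea for a single parameter tensor; it is a simplified approximation under assumed semantics for `weight`, `density`, and `lambda`, not mergekit's actual implementation, and the function name is hypothetical.

```python
import torch

def dare_ties_merge(base, donors, weights, densities, lam=1.35):
    """Toy per-tensor DARE TIES merge (illustrative only).

    base:      parameter tensor from the base model
    donors:    list of the same tensor from each donor model
    weights:   per-donor merge weights (e.g. 1.0, 0.15, 0.30)
    densities: per-donor fraction of delta entries kept by DARE
    lam:       final scale applied to the combined delta (`lambda`)
    """
    deltas = []
    for donor, w, d in zip(donors, weights, densities):
        delta = donor - base                    # task vector for this donor
        keep = torch.rand_like(delta) < d       # DARE: keep a random `d` fraction of entries
        delta = delta * keep / d                # rescale survivors so the expectation is unchanged
        deltas.append(w * delta)

    stacked = torch.stack(deltas)
    elected = torch.sign(stacked.sum(dim=0))    # TIES: elect a sign per element
    agree = torch.sign(stacked) == elected      # drop entries that fight the elected sign
    kept = stacked * agree
    counts = agree.sum(dim=0).clamp(min=1)      # avoid divide-by-zero where nothing agrees
    merged_delta = kept.sum(dim=0) / counts     # average the agreeing contributions
    return base + lam * merged_delta
```

The per-model `density` values in the configuration below control how aggressively each donor's delta is pruned before the sign election.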
### Models Merged
The following models were included in the merge:

* Sao10K/32B-Qwen2.5-Kunou-v1
* Kaoeiri/Qwenwify-32B-v3
* Dans-DiscountModels/Qwen2.5-32B-ChatML
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Kaoeiri/Qwenwify-32B-v3                 # Backbone model built from 11 pre-merged models
    parameters:
      weight: 1.0
      density: 0.911
  - model: Sao10K/32B-Qwen2.5-Kunou-v1             # Synthetic roleplay and scenario model
    parameters:
      weight: 0.15
      density: 0.814
  - model: Dans-DiscountModels/Qwen2.5-32B-ChatML  # Reasoning and conversational model
    parameters:
      weight: 0.30
      density: 0.871
merge_method: dare_ties
base_model: Qwen/QwQ-32B-Preview                   # Reasoning-focused QwQ preview model used as the base
parameters:
  density: 0.90          # Default density for donors that do not specify one
  epsilon: 0.05          # Small adjustment factor for fine-tuning
  lambda: 1.35           # Scaling factor applied to the combined task vectors
dtype: bfloat16          # bfloat16 precision to reduce memory use
tokenizer_source: union  # Union of all source tokenizers for maximum vocabulary coverage
```
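The merge itself can be reproduced by saving this configuration to a file and running it through mergekit's `mergekit-yaml` command-line entry point. Once the merged weights are available, they load like any other Qwen2.5-family causal LM. Below is a minimal sketch with transformers, assuming a hypothetical repository ID (substitute the actual repo for this merge):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/qwq-kunou-qwenwify-merge"  # hypothetical; replace with the real repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Introduce yourself in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```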