---
base_model:
- Hasnonname/Qwen2.5-14B-Wheatear-v0
- Hasnonname/Qwen2.5-14B-Kestrel-v0
library_name: transformers
tags:
- mergekit
- merge
---
# Qwen2.5-14B-Kebab-v0
- methodology taken from Aletheia v1
- hyperparams from Sugarquill v1
- dataset consists of creative writing, multiturn RP, and some general assistant tasks
- quants available here: Qwen2.5-14B-Kebab-v0-GGUF
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the SLERP merge method, with Hasnonname/Qwen2.5-14B-Wheatear-v0 as the base model.
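For context, SLERP blends the two checkpoints along the arc between their parameter vectors rather than along a straight line, which preserves the overall scale of the weights better than plain averaging. Below is a minimal per-tensor sketch of the idea (an illustration only, not mergekit's exact implementation); `t` corresponds to the `t: 0.7` value in the configuration further down.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two parameter vectors, measured on normalized copies.
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    theta = torch.acos(dot)
    # Nearly parallel vectors: fall back to plain linear interpolation.
    if theta < 1e-4:
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape).to(a.dtype)
    sin_theta = torch.sin(theta)
    w_a = torch.sin((1 - t) * theta) / sin_theta
    w_b = torch.sin(t * theta) / sin_theta
    return (w_a * a_flat + w_b * b_flat).reshape(a.shape).to(a.dtype)
```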
### Models Merged

The following models were included in the merge:
* Hasnonname/Qwen2.5-14B-Kestrel-v0
### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: Hasnonname/Qwen2.5-14B-Wheatear-v0
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - value: 0.7
slices:
- sources:
  - layer_range: [0, 48]
    model: Hasnonname/Qwen2.5-14B-Kestrel-v0
  - layer_range: [0, 48]
    model: Hasnonname/Qwen2.5-14B-Wheatear-v0
```
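The merge can be reproduced by passing this config to mergekit's `mergekit-yaml` CLI, and the resulting checkpoint loads like any other transformers model. A quick loading sketch, assuming the repo id matches the card title (the prompt is purely illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the card title; swap in a local merge output path if needed.
model_id = "Hasnonname/Qwen2.5-14B-Kebab-v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a short scene set in a rain-soaked harbor town."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```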